_prepare_bounce_buffer also calls _allocate_dmamem and needs to have
the passed BUS_* flags handled properly, so just move the handling
to _allocate_dmamem directly.
The logic in the mixer will automatically stop the mix thread but
leave the "Started" flag set in this case.
While at it, clean up that logic.
This suffices to get the mixer to connect and disconnect from
the same output, at least, and have audio output still work.
See inline comment. As we initialize the TimeComputer with the current
system_time(), if the driver reports the played_real_time of its last
buffer exchange (which, if we're restarting media_server, could be
non-zero), we need to just ignore it.
Fixes assertion failures when using usb_audio. And now that we have
this check in here, we can remove the assert from TimeComputer.
Also add a cast in _GetControlName to appease GCC2 while at it.
Also refactor the logic to not need goto.
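A minimal sketch of the kind of check described above (the names here are hypothetical, not the actual TimeComputer API): a real time reported by the driver that predates the TimeComputer's own initialization time must be stale, for instance left over from before a media_server restart, and is ignored.

```cpp
#include <cstdint>

// Hypothetical sketch: the TimeComputer is initialized with the current
// system_time(); any driver-reported real time older than that (e.g. a
// stale played_real_time surviving a media_server restart) is ignored.
static bool
should_ignore_real_time(int64_t reportedRealTime, int64_t initialRealTime)
{
	return reportedRealTime < initialRealTime;
}
```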
The previous design (from 2016) generated an event and wrote to the
control port to queue it, and then the control thread woke up our
semaphore at the appointed time. Rather than have this inefficiency,
just use a timeout to the acquire_sem (which is more similar to the
pre-2016 design.)
This should not affect mixer behavior (as we wait for buffers
inside this logic already), it should only reduce the latency
of actual mixer runs.
Time out after waiting at most 10 seconds. After 81e50deece,
this should only happen when the registrar succeeds in sending the
message but we fail in receiving it for whatever reason, so really
this is just a guard against infinite hangs.
Otherwise, if the filesystem doesn't set them, we will have garbage
values and act wrongly.
This fixes the second KDL and the underlying cause of #18838:
when the system gets into a low memory state, VFS purges unused vnodes.
But of course RAMFS keeps those nodes around. When the VFS went to
retrieve the nodes for reuse, the flags would sometimes randomly
have the "removed" flag set, and the VFS would then try to delete
the node. But of course it wasn't really removed, so we would hit
an assertion failure in RAMFS.
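The fix amounts to initializing the output fields before calling into the filesystem. A sketch with hypothetical names:

```cpp
#include <cstdint>

// Illustrative only (structure and field names are hypothetical):
// value-initializing the info handed to the filesystem guarantees that
// fields the FS never touches read as 0, so stack garbage cannot leave
// a spurious "removed" bit set on an otherwise live node.
struct fs_vnode_info {
	uint32_t flags;
	uint32_t type;
};

static fs_vnode_info
make_vnode_info()
{
	fs_vnode_info info = {};	// all fields cleared before the FS hook runs
	return info;
}
```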
The sender is waiting for a reply, so if the reply fails to send
the target will hang forever. Send back an error in that case.
Should fix hangs of Tracker and Deskbar under low memory conditions
with a very large clipboard.
This can only happen if the real time or performance time values
specified are very large (more than 24 bits), which should only
happen if the time specified is system_time() and the "last" time
is 0. Under that circumstance, last_drift should be 1.0f,
so we can avoid using it at all. Otherwise, invoke debugger().
This would have caught some (but not all) of the problems fixed
in preceding commits.
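A hedged sketch of that guard (names and structure are mine, not the actual TimeComputer code): values over 24 bits lose precision in single-precision float arithmetic, so they are only tolerated in the initial state where the previous drift is exactly 1.0f.

```cpp
#include <cstdint>
#include <cstdlib>

// Sketch of the guard described above. Deltas wider than 24 bits cannot
// be represented exactly in a float, so they are only accepted when the
// previous drift is 1.0f (the initial state), in which case the drift
// computation can be skipped entirely.
static float
checked_drift(int64_t realDelta, int64_t performanceDelta, float lastDrift)
{
	const int64_t kLimit = 1LL << 24;
	if (realDelta >= kLimit || performanceDelta >= kLimit) {
		if (lastDrift != 1.0f)
			abort();	// stands in for debugger() in this sketch
		return 1.0f;	// avoid the imprecise computation entirely
	}
	return (float)performanceDelta / (float)realDelta;
}
```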
BTimeSource::Now() uses the current real time to compute the
performance time, so if the performance time and last real time
are 0 in the time source data, we get a positive value that is
the same as the system time. That means we wind up waiting
unnecessarily to start the mixer, often for a duration equal to
the current system_time() at the moment the mixer was started.
So, rather than checking the computed Now(), we instead check the
raw performance and real time values from the time source, and
wait for those to be valid before starting.
Also remove a comment about the BeOS R5 multi_audio node. It seems
that ours generates valid time values more quickly, but still starts
off with performance and real times of 0 (which are the default in
the time source anyway.) The new code would still work under such
broken nodes regardless.
This seems to fix sound output taking a long time to start after boot
(or even longer after restarting media services.)
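The extrapolation behind this can be sketched as follows (a simplified model, not the actual BTimeSource implementation): performance time is computed from the last published pair plus the elapsed real time scaled by the drift, so the all-zero default state makes Now() track the raw system time and look plausible.

```cpp
#include <cstdint>

// Simplified model of BTimeSource extrapolation: with the default state
// (last performance time 0, last real time 0, drift 1.0), the computed
// performance time equals the raw system time, so Now() looks "valid"
// even though the node has never published a real time update.
static int64_t
performance_now(int64_t lastPerf, int64_t lastReal, float drift,
	int64_t realNow)
{
	return lastPerf + (int64_t)((realNow - lastReal) * drift);
}
```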
Otherwise they will mess up the time computer and then the published
times, giving huge or minuscule drift values (since the time computer
already has a non-zero real-time by this point, so it will compute
a negative difference if passed 0 for the current real time.)
The "drift" value is the ratio between performance and real time,
so it must never be 0. Specifying it as such would mean that the
consumers of the time source would wind up with wait times that were
extremely large, due to doing a float divide-by-zero.
While working on the kernel timer fixes, I noticed some timer events
that had very large, but not quite infinite, timeouts; and this was
one of them.
Should not constitute a behavioral change (since the nearly-infinite
timeouts would never be hit.)
The logic in add_timer was scheduling the timer using "scheduleTime",
the originally passed value, not "event->schedule_time", which
is adjusted inside add_timer to be relative to the system_time.
This meant that if the event was the first added to the list,
we would set the hardware clock for a very long time in the future
rather than the correct duration.
Since until recently cancel_timer reset the hardware clock every run
even if the cancelled timer wasn't at the head of the list, this
problem was covered up by that one, as usually the scheduler would
cancel a timer relatively frequently, and thus the hardware timer
would usually get set to the correct value relatively frequently.
But after c5a499a74b, this was not
the case anymore as we skip updating the hardware timer if we cancelled
any timer other than the one at the head of the list, exposing this bug.
The fix is simple: don't bother storing a local "scheduleTime" variable
separate from the event->schedule_time. This makes things less confusing
anyway.
Fixes #18967.
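A minimal reconstruction of the fix (not the actual kernel code; names follow the description above): the event's schedule_time is made relative to the current system time inside add_timer, and the hardware clock must be armed from that adjusted field rather than from the originally passed absolute value.

```cpp
#include <cstdint>

struct timer_event {
	int64_t schedule_time;
};

// Sketch of the corrected behavior: add_timer adjusts the event's
// schedule_time to be relative to "now"; when this event is the first
// in the list, the hardware timer must be set from the adjusted field,
// not from the original absolute scheduleTime (the bug fixed above).
static int64_t
add_timer_sketch(timer_event& event, int64_t scheduleTime, int64_t now)
{
	event.schedule_time = scheduleTime - now;	// relative to system_time
	return event.schedule_time;	// value used to arm the hardware clock
}
```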
HD currently fetches changelog and user ratings for packages using
a thread from the window. In this change, the fetching of this
data is instead performed using process coordinators in order to
make background processing behaviour consistent and to prep for
future changes.
Change-Id: I7fd0f33c4b9a63fa4b999e2909ce320296db59b9
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7928
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
The previous code (introduced in hrev57034) was correct for most
accesses, but rejected 8- and 16-bit accesses to the last word of
the configuration space.
May help with #18536
Change-Id: I3eecbdb187eca0ec57e0ce65e4d1eb0d7c43d00a
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7929
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Unless, of course, it has the B_BITMAP_CLEAR_TO_WHITE flag as well.
From my testing, not clearing the BBitmap matches BeOS's behaviour
more closely (if not exactly) compared to clearing the BBitmap.
My test program created the BBitmap and BView, drew a diagonal red line
across it, and saved the result to a file.
The results:
* BeOS - transparent background; red line with no anti-aliasing
* Haiku, current behaviour - white background; red line
* Haiku, new behaviour - transparent background; red line with
black pixels as artifacts of the anti-aliasing process.
The anti-aliasing artifacts, as PulkoMandy pointed out, are simply a
result of not using the B_OP_ALPHA and an appropriate blending mode,
and would happen on BeOS as well if the line had some transparency,
such as through anti-aliasing.
Change-Id: I09ac054eb0ce79e697b78ea48d1db4a15041e600
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7899
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
TCP times are measured in milliseconds, and so on LAN (or on two
VMs on the same host) we can wind up with round trip times of
less than 1 ms, which thus come out to 0. Tolerate this appropriately
rather than taking 0 to be a magic value meaning "unknown".
Change-Id: Ica827ee4ea353208291cf4348e9da8af6214b507
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7926
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
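One way to tolerate a 0 ms sample, sketched here with illustrative constants (not the actual stack code): track "no measurement yet" with an explicit sentinel instead of 0, so a genuine sub-millisecond round trip that rounds down to 0 ms still feeds the estimator.

```cpp
// Sketch of the idea: use a sentinel for "no RTT sample yet" rather
// than treating 0 as magic, so a LAN round trip that rounds to 0 ms
// is accepted as a real measurement.
const int kRTTUnknownMs = -1;	// hypothetical sentinel

static int
smoothed_rtt(int currentMs, int sampleMs)
{
	if (currentMs == kRTTUnknownMs)
		return sampleMs;	// first sample; 0 is valid on a LAN
	// RFC 6298-style smoothing: srtt += (sample - srtt) / 8
	return currentMs - (currentMs >> 3) + (sampleMs >> 3);
}
```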
The basic idea: target a window size large enough to fit one
second's worth of data in it, using the round-trip time to
condition when we make the computations.
If we don't have SACK (to reduce retransmissions on packet loss)
or the user has specified a specific receive buffer size, then don't
scale at all.
Send window scaling isn't implemented yet, as that more or less
requires more careful management of congestion windows and SACK
processing, which we do not currently implement.
Part of #15886.
Change-Id: Ia2480e6981324d2663e47cb17e8fc47ccc5f9aa0
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6364
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
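The scale selection itself is the standard RFC 7323 computation; a sketch (not the stack's actual code): the TCP header's window field is 16 bits, so advertising a larger buffer needs the smallest shift that makes the target fit, capped at 14 by the RFC.

```cpp
#include <cstdint>

// Sketch of receive-window scale selection (RFC 7323): pick the
// smallest shift that lets the 16-bit header field represent the
// target window; the shift count is capped at 14 by the RFC.
static uint8_t
receive_window_shift(uint32_t targetWindow)
{
	uint8_t shift = 0;
	while (shift < 14 && (targetWindow >> shift) > 65535)
		shift++;
	return shift;
}
```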
This is more important when window scaling is enabled, as otherwise
we will send large numbers of window-update ACKs needlessly.
Ideally we would just use fReceiveWindow here, but due to a
TODO it stays constant (or increases only) at present, so we
have to compute the window size remainder inline. Another
similar computation elsewhere failed to take the case when
the window is 0 into account, so fix that too while at it.
Change-Id: Ibcca258472940d7de2d1adc9f986ddb7245438be
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7924
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
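A hedged sketch of the zero-window-safe remainder computation (names are hypothetical, not the stack's fields): with unsigned arithmetic, subtracting buffered bytes from an exhausted window would wrap around to a huge value, so the zero case must be handled explicitly.

```cpp
#include <cstdint>

// Hypothetical sketch: remaining advertisable window, guarding the
// case where the data outstanding already meets or exceeds the window.
// Without the guard, unsigned subtraction would wrap to a huge value.
static uint32_t
window_remainder(uint32_t windowSize, uint32_t bytesOutstanding)
{
	if (bytesOutstanding >= windowSize)
		return 0;	// window exhausted (or over-committed)
	return windowSize - bytesOutstanding;
}
```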
The network stack (TCP in particular) does not handle path MTU
discovery properly (or at all), so we should avoid trying to
send (or advertise support for) frames that large.
Now that we use net_buffers for receiving and sending directly,
this value really is only the "MTU"; it is entirely possible
to receive frames larger than this successfully. So this should
only fix things and not break anything at present.