We shouldn't return B_OK here, because then the page writer will
assume it's acquired a store ref and can write pages from this
cache, when of course it's done nothing of the sort.
Previously, we'd wind up adding pages from the source to the consumer
that were potentially or actually outside the consumer's bounds.
Now we check the consumer's size and ignore any pages that we don't
want or need; they'll just be freed along with the source cache.
While at it, drop VMAnonymousCache::_MergePagesSmallerSource; it
was the same as the base class's implementation of Merge before
this commit. Also add a comment to _MergePagesSmallerConsumer noting
that some of the pages may be busy (indeed, I managed to trigger a
copy-on-write-related assert in here at least once.)
I discovered this problem because the page commitment size ASSERT()s
triggered inside Resize() and Rebase(); but the out-of-range pages
already existed in the cache before those functions were called. So,
I've also added an ASSERT to MovePage() that would have caught this
problem more directly.
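To illustrate the invariant that ASSERT is meant to enforce, here is a
standalone sketch (hypothetical names and layout, not the actual VMCache
code): a page moved into a cache must fall within that cache's bounds.

    #include <cassert>
    #include <cstdint>

    constexpr uint64_t kPageSize = 4096;

    struct PageSketch {
        uint64_t cache_offset;      // offset within the cache, in pages
    };

    struct CacheSketch {
        uint64_t virtual_base;      // byte bounds of the cache
        uint64_t virtual_end;

        void MovePage(PageSketch& page)
        {
            const uint64_t offset = page.cache_offset * kPageSize;
            // A page moved into this cache must lie within its bounds;
            // merging a larger source into a smaller consumer used to
            // violate this.
            assert(offset >= virtual_base && offset < virtual_end);
            // ...actual insertion into this cache's page tree goes here...
        }
    };

    int main()
    {
        CacheSketch consumer{0, 8 * kPageSize};
        PageSketch page{3};         // page 3 lies inside the 8-page consumer
        consumer.MovePage(page);    // cache_offset >= 8 would trip the assert
    }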
The original meaning of vfork is "fork, sharing virtual memory" (until
exec). We don't implement that, and may never do so. However, since
calling any functions besides exec() in a vfork'ed child is "undefined
behavior", we can take advantage of that fact at least by not calling
any of the pre- and post-fork hooks, saving a lot of page faults from
copy-on-write.
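As a rough sketch of the idea (hypothetical names and a stand-in hook
list, not Haiku's actual libroot code), vfork() can share the fork path
but simply skip the hook calls:

    #include <sys/types.h>
    #include <unistd.h>
    #include <vector>

    // Stand-in for the real pthread_atfork() bookkeeping.
    static std::vector<void (*)()> sPrepareHooks, sParentHooks, sChildHooks;

    static pid_t fork_common(bool runHooks)
    {
        if (runHooks) {
            for (auto hook : sPrepareHooks)
                hook();                     // pre-fork hooks
        }

        pid_t child = fork();               // the actual kernel fork

        if (runHooks && child >= 0) {
            if (child == 0) {
                for (auto hook : sChildHooks)
                    hook();                 // post-fork hooks, child side
            } else {
                for (auto hook : sParentHooks)
                    hook();                 // post-fork hooks, parent side
            }
        }
        return child;
    }

    pid_t my_fork()  { return fork_common(true); }
    pid_t my_vfork() { return fork_common(false); }   // hooks skipped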
On one run of the "compile HaikuDepot and the mime_db" benchmark with -j4,
the total waits count on the top two VMCaches by contention dropped
from 62125 and 58927, to 52034 and 41225.
musl apparently does more or less this same thing (vfork() is fork()
but without calling any of the hooks.)
The first was added in 2004 when there wasn't even a branch for
initializing the child; I think this can be considered done now.
The second was added in 2010, but it seems in the meantime we've
decided that reinitializing locks is the best way to make them
consistent after calling fork(), so it's also obsolete.
Otherwise, platform loaders couldn't make heap allocations inside
platform_start_kernel(), which some loaders (e.g. EFI) do.
Implement calling heap_release() for the BIOS loaders at least.
This gets us back the ~1.5MB of bootloader heap memory there.
Only show ratings UI elements when the package can be rated
Change-Id: I4f464e2cb21f927186c0ffdddbc5c11498ffed31
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8678
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Rather than setting it from the total count of pages, and then reducing
it by the size of the B_ALREADY_WIRED areas incrementally. This means
that other things allocated in the early boot period (like page tables)
will also be accounted for. The downside is that, if such allocations
don't have a corresponding area, then any pages freed from them later
won't unreserve memory at present; but the early boot page tables likely
won't be freed at all (since they'll remain in use, or should already
have been freed in the case of the 32-bit to PAE transition.)
(In the future, we should reserve memory as well as pages for the page
tables, and that will take care of that problem anyway.)
Booting x86_64 in QEMU with 1GB of RAM, the old accounting method produced
an initial (after ALREADY_WIRED accounting) sAvailableMemory of 251,368
total pages, while this new accounting method gives 250,812 instead,
a difference of 556 pages. (Some of that is probably the never-freed
bootloader memory, which I think is around ~360 pages.)
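For scale (assuming the standard 4 KiB page size on x86_64), that
difference and the bootloader estimate work out to roughly:

    #include <cstdio>

    int main()
    {
        const double pageKiB = 4096 / 1024.0;   // 4 KiB pages
        printf("556 pages  = %.1f MiB\n", 556 * pageKiB / 1024.0);  // ~2.2 MiB
        printf("~360 pages = %.1f MiB\n", 360 * pageKiB / 1024.0);  // ~1.4 MiB
    }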
Overall this should reduce the amount of "theoretically available but
actually inaccessible" memory, which should hopefully help with the VM
getting itself into trouble thinking memory is available when it
really isn't.
Fixes the code I introduced in hrev50114 for custom serial port
baudrates. The idea there was based on the FreeBSD implementation, but I
missed a key detail: speed_t in BeOS (and Haiku) is only an 8-bit value.
Note that BeOS does not have c_ispeed and c_ospeed fields; instead they
are named c_ixxxxx and c_oxxxxx, with a comment in termios.h saying that
they are not used. So renaming and moving these fields isn't a problem.
This means the previous code worked only for speeds between 20 and 255
baud, quite the opposite of what I wanted to do, which was to enable
access to fast baudrates.
This new implementation exploits the fact that tcflag_t was 32 bits
wide, but we never actually used more than 16 of them. The high bits of
each value were therefore unused, and can be reclaimed to store the
speed by shrinking tcflag_t to 16 bits. The speed is then stored as two
16-bit values that can be combined into a single 32-bit one. The flag
bits do not move (on little-endian systems), and the extra values are
guaranteed to be 0 in any previous code that was compiled with the
32-bit tcflag_t.
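As a standalone sketch of the encoding (hypothetical helper names, not
the actual termios.h layout), the 32-bit speed splits into two 16-bit
halves for storage and recombines losslessly when read back:

    #include <cassert>
    #include <cstdint>

    typedef uint32_t speed_t;       // now 32 bits, fits modern baudrates

    struct speed_halves {
        uint16_t high;              // upper 16 bits of the speed
        uint16_t low;               // lower 16 bits of the speed
    };

    static speed_halves split_speed(speed_t speed)
    {
        return { (uint16_t)(speed >> 16), (uint16_t)(speed & 0xffff) };
    }

    static speed_t combine_speed(speed_halves halves)
    {
        return ((speed_t)halves.high << 16) | halves.low;
    }

    int main()
    {
        speed_t baud = 921600;                  // doesn't fit in 8 (or 16) bits
        speed_halves stored = split_speed(baud);
        assert(combine_speed(stored) == baud);  // round-trips losslessly
    }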
Support for different input and output speeds is now also possible
(POSIX specifies separate functions for setting them, which mattered for
some old terminals and modems, where a high baudrate was wanted for data
displayed on the screen while typed input didn't need to be nearly as
fast). If desired, we could now properly implement this in our serial
drivers, but it isn't done here yet.
Additional changes:
- speed_t is now a 32-bit type, allowing large values to be passed to
cfset(i,o)speed
- fix some places where a baudrate enum value was incorrectly put in the
c_ispeed and c_ospeed fields; that is not how they were meant to be
used (it meant the default was a speed of 0, which means "hang up"
the line, something I think no serial driver really implemented).
- do not put baudrate enumeration values in c_iflag and c_oflag; they
are meant to be used in c_cflag only, and conflict with other bits.
Separate input and output speeds can be requested by setting the
c_cflag value to CBAUD (indicating custom baudrates) and then setting
the values in c_ispeed and c_ospeed (see the usage sketch below).
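For reference, a minimal usage sketch (the device path is only an
example; passing a numeric baudrate this way relies on the 32-bit
speed_t described above):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main()
    {
        int fd = open("/dev/ports/usb0", O_RDWR);   // example device path
        if (fd < 0)
            return 1;

        struct termios tio;
        if (tcgetattr(fd, &tio) != 0)
            return 1;

        cfsetispeed(&tio, 921600);      // large value, needs 32-bit speed_t
        cfsetospeed(&tio, 921600);

        int result = tcsetattr(fd, TCSANOW, &tio);
        close(fd);
        return result == 0 ? 0 : 1;
    }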
Fixes #18483
Change-Id: If63a24b5ced5edf6d051d921197db194def0c614
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7068
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
See inline comment: we can potentially wind up with conflicting mappings,
depending on what the system ACPI firmware tells ACPICA to do, so it's
best if we avoid using non-default types on architectures where they
aren't strictly necessary.
Fixes #19119 and related issues.
Otherwise we may fault later but have no memory to satisfy the fault.
For a compile of HaikuDepot and the mime_db in VMware with -j4, this
seems to increase the wait time on the "available memory" lock from
~0.1s to ~0.5s, and the wait count from ~500 to ~1500 (overall real time
~30s.) Probably we can mitigate that later by doing atomic updates on
sAvailableMemory, at least for releasing memory.
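A rough sketch of what that mitigation could look like (hypothetical
names, not the actual vm.cpp code), where releasing becomes an atomic
add while reserving keeps the lock since it must check the limit and,
in the real VM, possibly wait:

    #include <atomic>
    #include <cstdint>
    #include <mutex>

    static std::atomic<int64_t> sAvailableMemory{0};
    static std::mutex sAvailableMemoryLock;

    void unreserve_memory(int64_t amount)
    {
        // Pure increase, no limit check needed; the real version would
        // also have to notify any reservations waiting for memory.
        sAvailableMemory.fetch_add(amount, std::memory_order_relaxed);
    }

    bool try_reserve_memory(int64_t amount)
    {
        std::lock_guard<std::mutex> locker(sAvailableMemoryLock);
        if (sAvailableMemory.load(std::memory_order_relaxed) < amount)
            return false;   // the real VM may instead wait and retry
        sAvailableMemory.fetch_sub(amount, std::memory_order_relaxed);
        return true;
    }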
Change-Id: I61abc28d1fc30f7b3d5fd9a2e68e4f4ec960f88d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8677
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
* Fix a silly bug in compute_area_page_commitment that was leading to
the cache's pages not being taken into account at all.
* Don't let VMCache::Resize() and Rebase() alter the commitments,
but rather let us do that. Add assertions that they did not fail.
* Move area->cache_offset increment up so that compute_area_page_commitment
can use it as it needs to.
Fixes assertion failures from boehm-gc tests following the previous commits.
This means that we won't try to change the commitment at all, and it
will be up to the caller to do that instead.
Also move the commitment change from the beginning to the end of Rebase,
matching Resize. This way, we won't trip the new asserts added to Commit()
in the previous commits.
Add a relevant assert to vm_try_reserve_memory to make sure the
negative priority doesn't end up down that far.
Overcommitted caches should only have commitments equal to the
number of pages they actually contain, so we should decommit
whenever pages are discarded.
This changes the API of VMCache::Discard to return an ssize_t
of the size of pages that were discarded (or a negative error on
failure.) Nothing checked the return value besides things in VMCache
itself, it appears; but it apparently never fails, so that's fine.
Also add asserts to Commit() that the new commitment at least
encompasses all pages the cache actually contains.
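A standalone sketch of the new contract (hypothetical names, not the
real VMCache): Discard() reports the size of the pages it removed so the
caller can decommit exactly that amount, and Commit() asserts the
commitment still covers every page held.

    #include <sys/types.h>      // ssize_t
    #include <cassert>
    #include <cstdint>

    constexpr int64_t kPageSize = 4096;

    struct CacheSketch {
        int64_t pageCount = 8;                  // pages currently held
        int64_t committedSize = 8 * kPageSize;  // overcommitting: == pages held

        ssize_t Discard(int64_t pages)
        {
            if (pages < 0 || pages > pageCount)
                return -1;                      // negative value on failure
            pageCount -= pages;
            return pages * kPageSize;           // size of the discarded pages
        }

        void Commit(int64_t size)
        {
            // Mirrors the new assert: commitment must cover all held pages.
            assert(size >= pageCount * kPageSize);
            committedSize = size;
        }
    };

    int main()
    {
        CacheSketch cache;
        ssize_t discarded = cache.Discard(3);
        assert(discarded == 3 * kPageSize);
        cache.Commit(cache.committedSize - discarded);  // decommit that amount
    }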
In copy_on_write_area, the copied cache should have the same overcommit
status as the original area, and in set_memory_protection, we shouldn't
change the committed size at all if the cache is overcommitting (otherwise,
we'd wind up shrinking caches' commit sizes below the actual number of
pages they contained, in some cases.)
It seems this method was never renamed when MergeStore was renamed
to Merge all the way back in hrev27179. However, that wound up
working out, because this method also didn't call the base class
implementation that actually merges the page trees properly, so
it wouldn't have worked anyway.
Follow input_device_type above: we don't have _TYPE or _SUBTYPE on
the end, but _POINTING in the middle, because these aren't in a global
"subtype" enumeration, but a B_POINTING_DEVICE-specific enumeration.
Also don't bother adding the UNKNOWN type to messages that have no
type; if it's not included, UNKNOWN is implied. Saves a few CPU cycles.
Change-Id: I9088b9fcee63bf001b43febbe1e3ac17eb1792b4
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8635
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
"Move" now sounds like it has 'move' semantics (i.e. replaces this
structure's data with the other structure's data), while MoveFrom()
really had 'move+append' semantics (appends the other list's elements
to this list, and clears the other list.) To make this clearer, it's
here renamed to "TakeFrom".
This should reduce confusion with the other move-related APIs that
are starting to show up in the Haiku tree (e.g. "MoveFrom" in BRegion.)
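To illustrate the distinction with a stand-in (std::list here, not
Haiku's actual list classes):

    #include <cassert>
    #include <list>
    #include <utility>

    int main()
    {
        std::list<int> a{1, 2}, b{3, 4};

        // TakeFrom-style "move+append": a keeps its elements, gains b's,
        // and b is left empty afterwards.
        a.splice(a.end(), b);
        assert((a == std::list<int>{1, 2, 3, 4}) && b.empty());

        // Plain 'move' semantics, for contrast: a's previous contents are
        // replaced wholesale rather than extended.
        std::list<int> c{5, 6};
        a = std::move(c);
        assert((a == std::list<int>{5, 6}));
    }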
Change-Id: Ib0a61a9c12fe8812020efd55a2a0818883883e2a
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8634
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Reviewed-by: X512 X512 <danger_mail@list.ru>
This is now necessary after enabling delayed commitments for anonymous
mappings with PROT_NONE.
Change-Id: I33b76f9d9f6a1d560793e523b74e9ac9fd7a4f62
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8676
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
- It is dead code that has not been enabled for a long time.
- Asynchronous back-to-front framebuffer copying breaks the update
session logic and introduces flickering artefacts.
Change-Id: Ifefd711e8dcd900443ba976f5efe128744fef2ca
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8617
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
Reviewed-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
- It has not been enabled for a long time and is effectively dead code.
- Earlier testing showed it is actually slower on hardware less than
15 years old, so it has no benefits. Modern CPUs have no problems
with simple memory filling/copying operations, and more complex
acceleration operations are not supported by the current accelerant
driver API.
- It breaks double buffering and reintroduces flickering artefacts.
- It is incompatible with antialiased CPU drawing, because reading GPU
framebuffer memory is extremely slow and reading is required for
alpha blending. The rendering buffer must therefore be in CPU memory;
an offscreen GPU buffer can't be used.
- Hardware 2D acceleration on modern hardware is usually implemented
using generic GPU rendering APIs such as OpenGL or Vulkan.
Change-Id: Ifb93c80cca4fc5f072e3166b29fc63b643ddb437
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8616
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
These were declared in this header on BeOS, so we need to keep
them around for ABI compatibility, but they are nonstandard
and no other C library besides glibc appears to provide them
at all (not even musl, and none of the BSDs.)
Reduces the size of WiFi drivers by a bit (and reduces the number
of symbols the kernel has to resolve within the binaries.)
Tested with realtekwifi, still works.
This will signal the package state change to
"pending" more quickly when the package is
being installed.
Change-Id: Ic0bbb0dbbe938f73348cb184aa1c3b83db90acd5
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8588
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Andrew Lindesay <apl@lindesay.co.nz>