This reverts a8877df135.
Previously, the "unmergeable" flag was necessary for the RAMFS,
because if the last vnode reference was released while there
was still a consumer (as the old ordering of _RemoveConsumer
allowed), then releasing the cache reference when the vnode
was removed would result in the cache trying to merge with
its now-only consumer and sole referrer.
Now, instead, we remove the consumer before releasing the store
reference, so that there's no chance the cache will be merged
inside this method.
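A rough sketch of that ordering, using simplified stand-in types rather
than the actual VMCache internals (the real method does considerably more
bookkeeping than shown here):

    #include <list>

    // Simplified stand-ins for illustration; not the real VMCache code.
    struct CacheSketch {
        CacheSketch*            source = nullptr;
        std::list<CacheSketch*> consumers;
        int                     refCount = 1;

        void ReleaseStoreRef() { /* drop the store's reference */ }
        void ReleaseRef()      { if (--refCount == 0) delete this; }

        void RemoveConsumer(CacheSketch* consumer)
        {
            // New ordering: detach the consumer first...
            consumers.remove(consumer);
            consumer->source = nullptr;

            // ...and only then drop the references. If this was the last
            // one, the cache is simply destroyed; with the old ordering,
            // releasing the reference while the consumer was still attached
            // could make the cache try to merge with its now-only consumer.
            ReleaseStoreRef();
            ReleaseRef();
        }
    };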
mmap_cut_tests still pass, web browsers using ramfs shared_memory
still seem to work.
This way, if Resize() is supposed to take care of the commitment,
it will (and will fail early if it can't); if instead we are the ones
responsible for adjusting the commitment, map_backing_store won't
commit at all (avoiding committing far more than will be necessary),
and we can just steal the commitment from the first cache for the second.
This reverts 3a81e9446d (2022).
That commit fixed #17556 by just checking if the area had an
underlying cache that wasn't a RAM cache. But there are cases
where there will be RAM source caches that we have to take
into account, too, not just vnode caches or the like. The
most common example of that would be all areas of a team
after a fork(); the original pages will be in a read-only
source cache.
This commit fixes the real underlying problem: if the first area
has a source cache, then the new second cache needs to have that
as its source, too; and furthermore must have the correct offsets
in order to access its pages correctly.
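As an illustration of the bookkeeping this requires (simplified stand-in
types and a hypothetical helper, not the actual VMArea/VMCache definitions):

    #include <cstdint>

    // Simplified stand-ins; the field names only mirror the terminology
    // used above, not the real Haiku structures.
    struct CacheSketch {
        CacheSketch* source = nullptr;
        int64_t      virtual_base = 0;   // offset of the cache's first byte
        int64_t      source_offset = 0;  // where this cache starts in its source
    };

    struct AreaSketch {
        CacheSketch* cache = nullptr;
        int64_t      cache_offset = 0;   // area offset into its cache
    };

    // Hypothetical helper: when the tail of `area`, starting `cutOffset`
    // bytes in, is split off into `secondArea`/`secondCache`, the second
    // cache must reference the same source cache and be shifted by the same
    // amount, or copy-on-write faults in the second area would resolve to
    // the wrong source pages.
    void
    link_second_cache(const AreaSketch& area, AreaSketch& secondArea,
        CacheSketch& secondCache, int64_t cutOffset)
    {
        const CacheSketch& cache = *area.cache;
        secondCache.source = cache.source;
        secondCache.virtual_base = cache.virtual_base + cutOffset;
        secondCache.source_offset = cache.source_offset + cutOffset;

        secondArea.cache = &secondCache;
        secondArea.cache_offset = area.cache_offset + cutOffset;
    }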
The test for #17556 that was added in 9ed77019b6
still works as before, as do all the applications I tested that
use cut_area. Some assertion failures that the cut tests triggered
(related to commitment sizes) are fixed by this, as well.
This also seems to fix the remaining instability on fork() in the
boehm-gc's "gctest".
The page_protections aren't changed at all, so all pages that exist
should already have the same protections as specified in the array.
The only thing that's different is which cache and area they now belong to,
but the VMTranslationMap does not care about that.
So we don't need to loop over the pages and re-protect them in this case.
We already skipped that in all cases where no page_protections were involved.
(It seems this logic was introduced in bdcc293fa8
along with general page_protections support in cut_area.)
When the area has no page_protections but isn't writable,
we also want to use a smaller-than-default commitment.
So, adjust compute_area_page_commitment to handle that case,
and then use it in cut_area where appropriate.
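A minimal sketch of that idea, under the assumption that a fully read-only
area only ever needs backing for pages that already exist (simplified types;
the real function walks the cache's page tree and also handles
page_protections):

    #include <cstddef>
    #include <vector>

    static const size_t kPageSize = 4096;

    // Simplified stand-in for an area and the "does a page exist here"
    // question the real code answers via the cache's page tree.
    struct AreaSketch {
        std::vector<bool> pagePresent;
        bool              writable;
        size_t            SizeInPages() const { return pagePresent.size(); }
    };

    size_t
    compute_area_page_commitment(const AreaSketch& area)
    {
        if (area.writable) {
            // Default case: any page could be dirtied, so commit the whole area.
            return area.SizeInPages() * kPageSize;
        }

        // Read-only case described above: no new private pages can ever be
        // inserted, so only the pages already present need backing.
        size_t committed = 0;
        for (size_t i = 0; i < area.SizeInPages(); i++) {
            if (area.pagePresent[i])
                committed += kPageSize;
        }
        return committed;
    }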
cache->virtual_base is the cache's start address; no pages will be
found before it. area->cache_offset, on the other hand, is the area's
offset into the cache (i.e. offset 0 in the area will be offset
0 + area->cache_offset in the cache.) These addresses may well be
the same (even if they're not 0), and in many situations they are,
but with shared or cut areas, they may not be.
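To make the distinction concrete, a small self-contained example
(simplified types, made-up numbers):

    #include <cstdint>
    #include <cstdio>

    struct CacheSketch {
        uint64_t virtual_base;  // offset of the first byte the cache covers
    };

    struct AreaSketch {
        uint64_t     base;          // virtual address the area is mapped at
        uint64_t     cache_offset;  // offset of the area's first byte in the cache
        CacheSketch* cache;
    };

    // Offset 0 in the area corresponds to offset `cache_offset` in the cache,
    // so iterating "the pages this area can see" starts at cache_offset, not
    // at cache->virtual_base.
    uint64_t
    cache_offset_for(const AreaSketch& area, uint64_t address)
    {
        return (address - area.base) + area.cache_offset;
    }

    int
    main()
    {
        // A cut or shared area that begins 0x2000 bytes into its cache,
        // while the cache itself begins at virtual_base 0x1000.
        CacheSketch cache = { 0x1000 };
        AreaSketch area = { 0x10000000, 0x2000, &cache };
        printf("%#llx\n",
            (unsigned long long)cache_offset_for(area, 0x10003000));
        // Prints 0x5000 -- not what you'd get by starting from virtual_base.
        return 0;
    }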
The only thing that uses this method is madvise(MADV_FREE), which
probably few things besides the guarded_heap use at present.
We shouldn't return B_OK here, because then the page writer will
assume it's acquired a store ref and can write pages from this
cache, when of course it's done nothing of the sort.
Previously, we'd wind up adding pages from the source to the consumer
that were potentially or actually outside the consumer's bounds.
Now we check the consumer's size and ignore any pages that we don't
want or need; they'll just be freed along with the source cache.
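A sketch of that bounds check (simplified stand-in types; the real code
walks the VMCache page trees, moves pages with MovePage(), and also has
to deal with busy pages):

    #include <cstdint>
    #include <map>

    struct PageSketch { /* ... */ };

    struct CacheSketch {
        std::map<uint64_t, PageSketch*> pages;  // keyed by cache offset
        uint64_t virtual_base;
        uint64_t virtual_end;
    };

    void
    merge_pages(CacheSketch& source, CacheSketch& consumer)
    {
        for (const auto& entry : source.pages) {
            const uint64_t offset = entry.first;
            if (offset < consumer.virtual_base || offset >= consumer.virtual_end) {
                // Outside the consumer's bounds: skip it; it will be freed
                // together with the rest of the source cache.
                continue;
            }
            if (consumer.pages.count(offset) == 0) {
                // The consumer doesn't have its own copy yet; take this one.
                consumer.pages[offset] = entry.second;
            }
        }
    }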
While at it, drop VMAnonymousCache::_MergePagesSmallerSource; it
was the same as the base class's implementation of Merge preceding
this commit. Also add a comment to _MergePagesSmallerConsumer noting
that some of the pages may be busy (indeed, I managed to trigger an
assert related to copy-on-write in here at least once.)
I discovered this problem because the page commitment size ASSERT()s
triggered inside Resize() and Rebase(); but the out-of-range pages
already existed in the cache before those functions were called. So,
I've also added an ASSERT to MovePage() that would have caught this
problem more directly.
The original meaning of vfork is "fork, sharing virtual memory" (until
exec). We don't implement that, and may never do so. However, since
calling any functions besides exec() in a vfork'ed child is "undefined
behavior", we can take advantage of that fact at least by not calling
any of the pre- and post-fork hooks, saving a lot of page faults from
copy-on-write.
On one run of the "compile HaikuDepot and the mime_db" benchmark with -j4,
the total waits count on the top two VMCaches by contention dropped
from 62125 and 58927, to 52034 and 41225.
musl apparently does more or less this same thing (vfork() is fork()
but without calling any of the hooks.)
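A sketch of the distinction, with hypothetical stand-in names (these are
not Haiku's actual libroot internals):

    #include <sys/types.h>

    // Hypothetical stand-ins: raw_fork() represents the bare kernel fork,
    // and the run_*_hooks() functions represent the registered fork hooks.
    static pid_t raw_fork() { return 0; /* stub for the sketch */ }
    static void run_prepare_hooks() {}
    static void run_parent_hooks() {}
    static void run_child_hooks() {}

    static pid_t
    hooked_fork()
    {
        run_prepare_hooks();
        pid_t child = raw_fork();
        if (child == 0)
            run_child_hooks();    // touches lots of state => COW faults
        else
            run_parent_hooks();
        return child;
    }

    static pid_t
    hookless_vfork()
    {
        // The child may only call exec*() or _exit(), so the hooks (and the
        // copy-on-write faults they would cause) can be skipped entirely.
        return raw_fork();
    }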
The first was added in 2004 when there wasn't even a branch for
initializing the child; I think this can be considered done now.
The second was added in 2010, but it seems in the meantime we've
decided that reinitializing locks is the best way to make them
consistent after calling fork(), so it's also obsolete.
Otherwise, platform loaders couldn't make heap allocations inside
platform_start_kernel(), which some loaders (e.g. EFI) do.
Implement calling heap_release() for the BIOS loaders at least.
This gets us back the ~1.5MB of bootloader heap memory there.
Only show ratings UI elements when the package could be rated
Change-Id: I4f464e2cb21f927186c0ffdddbc5c11498ffed31
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8678
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Rather than setting it from the total count of pages, and then reducing
it by the size of the B_ALREADY_WIRED areas incrementally. This means
that other things allocated in the early boot period (like page tables)
will also be accounted for. The downside is that, if they don't have
a corresponding area, then any pages freed later on won't also unreserve
memory at present; but the early boot page tables likely won't be freed
at all (since they'll be in use; or should have already been freed in the
case of the 32-bit to PAE transition.)
(In the future, we should reserve memory as well as pages for the page
tables, and that will take care of that problem anyway.)
Booting x86_64 in QEMU with 1GB of RAM, the old accounting method produced
an initial (after ALREADY_WIRED accounting) sAvailableMemory of 251,368
total pages, while this new accounting method gives 250,812 instead,
a difference of 556 pages. (Some of that is probably the never-freed
bootloader memory, which I think is around ~360 pages.)
Overall this should reduce the amount of "theoretically available but
actually inaccessible" memory, which should hopefully help with the VM
getting itself into trouble thinking memory is available when it
really isn't.
Fixes the code I introduced in hrev50114 for custom serial port
baudrates. The idea there was based on the FreeBSD implementation, but I
missed a key detail: speed_t in BeOS (and Haiku) is only an 8 bit value.
Note that BeOS does not have c_ispeed and c_ospeed fields; instead they
are named c_ixxxxx and c_oxxxxx, with a comment in termios.h saying that
they are not used. So renaming and moving these fields isn't a problem.
This means the previous code worked only for speeds between 20 and 255
baud, quite the opposite of what I wanted to do, which was to enable
access to fast baudrates.
This new implementation exploits the fact that tcflag_t is 32 bit, but
we never actually use more than 16 bits. Therefore, the high bits of
each value were unused, and can be reclaimed to store the speed,
by changing tcflag_t to 16 bits. The speed is then inserted as two 16
bit values that can be combined as a 32 bit one. The flag bits are not
moved (on little endian systems), and the extra values are guaranteed to
be set to 0 by any previous code that was compiled with 32 bit tcflag_t.
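An illustration of the packing itself (stand-in names, not the actual
Haiku termios layout): a 32-bit speed is split into two 16-bit halves that
occupy space the old 32-bit tcflag_t fields left unused, so structures
filled in by code built with the old 32-bit tcflag_t simply leave both
halves at 0.

    #include <cstdint>
    #include <cassert>

    struct speed_halves {
        uint16_t low;    // lower 16 bits of the speed
        uint16_t high;   // upper 16 bits of the speed
    };

    static void
    set_speed(speed_halves& f, uint32_t speed)
    {
        f.low = uint16_t(speed & 0xffff);
        f.high = uint16_t(speed >> 16);
    }

    static uint32_t
    get_speed(const speed_halves& f)
    {
        return uint32_t(f.low) | (uint32_t(f.high) << 16);
    }

    int
    main()
    {
        speed_halves f = {};
        set_speed(f, 921600);       // a baudrate that never fit in 8 bits
        assert(get_speed(f) == 921600);
        return 0;
    }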
Support for different speeds for input and output is now also possible
(POSIX specifies separate functions for setting the input and output
speeds, which is useful for some old terminals and modems where a high
baudrate was wanted for data displayed on the screen, while things typed
on the keyboard aren't nearly as fast). If desired, we could now properly
implement this in our serial drivers, but it isn't done here yet.
Additional changes:
- speed_t is now a 32 bit type, allowing large values to be passed to
cfset(i,o)speed
- fix some places where a baudrate enum value was incorrectly put in the
c_ispeed and c_ospeed fields; this is not how they were meant to be
used (it meant the default was a speed of 0, which means "hang up"
the line, and which I think no serial driver really implemented).
- do not put baudrate enumeration values in c_iflag and c_oflag; they
are meant to be used in c_cflag only, and conflict with other bits.
Separate speeds for input and output can be achieved by setting the
c_cflag value to CBAUD (indicating custom baudrates) and then setting
the values in c_ispeed and c_ospeed.
Fixes #18483
Change-Id: If63a24b5ced5edf6d051d921197db194def0c614
Reviewed-on: https://review.haiku-os.org/c/haiku/+/7068
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
See inline comment: we can potentially wind up with conflicting mappings,
depending on what the system ACPI firmware tells ACPICA to do, so it's
best if we avoid using non-default types on architectures where they
aren't strictly necessary.
Fixes #19119 and related issues.
Otherwise we may fault later but have no memory to satisfy the fault.
For a compile of HaikuDepot and the mime_db in VMware with -j4, this
seems to increase the wait time on the "available memory" lock from
~0.1s to ~0.5s, and the wait count from ~500 to ~1500 (overall real time
~30s.) Probably we can mitigate that later by doing atomic updates on
sAvailableMemory, at least for releasing memory.
Change-Id: I61abc28d1fc30f7b3d5fd9a2e68e4f4ec960f88d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8677
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
* Fix a silly bug in compute_area_page_commitment that was leading to
the cache's pages not being taken into account at all.
* Don't let VMCache::Resize() and Rebase() alter the commitments,
but rather let us do that. Add assertions that they did not fail.
* Move area->cache_offset increment up so that compute_area_page_commitment
can use it as it needs to.
Fixes assertion failures from boehm-gc tests following the previous commits.
This means that we won't try to change the commitment at all, and it
will be up to the caller to do that instead.
Also move the commitment change from the beginning to the end of Rebase,
matching Resize. This way, we won't trip the new asserts added to Commit()
in the previous commits.
Add a relevant assert to vm_try_reserve_memory to make sure the
negative priority doesn't end up down that far.
Overcommitted caches should only have commitments equal to the
number of pages they actually contain, so we should decommit
whenever pages are discarded.
This changes the API of VMCache::Discard to return an ssize_t:
the total size of the pages that were discarded (or a negative error on
failure.) It appears nothing besides VMCache itself checked the return
value; but it apparently never fails, so that's fine.
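From a caller's perspective, the changed interface looks roughly like this
(sketch only, with stub bodies and a hypothetical Decommit() helper standing
in for the real VMCache methods):

    #include <sys/types.h>   // ssize_t
    #include <cstdint>

    // Sketch only: stubs standing in for the real VMCache methods.
    struct CacheSketch {
        bool     overcommitting = true;
        uint64_t committed = 0;

        // Discard(): negative return = error code, non-negative return =
        // bytes' worth of pages that were actually discarded.
        ssize_t Discard(uint64_t /*offset*/, uint64_t size) { return (ssize_t)size; }

        // Hypothetical helper, not a real VMCache method.
        void Decommit(uint64_t bytes) { committed -= bytes; }
    };

    void
    discard_and_decommit(CacheSketch& cache, uint64_t offset, uint64_t size)
    {
        ssize_t discarded = cache.Discard(offset, size);
        if (discarded < 0)
            return;   // error (apparently never happens in practice, per above)

        // Overcommitted caches should only stay committed for the pages they
        // actually contain, so shrink the commitment by what was discarded.
        if (cache.overcommitting && discarded > 0)
            cache.Decommit((uint64_t)discarded);
    }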
Also add asserts to Commit() that the new commitment at least
encompasses all pages the cache actually contains.
In copy_on_write_area, the copied cache should have the same overcommit
status as the original area, and in set_memory_protection, we shouldn't
change the committed size at all if the cache is overcommitting (otherwise,
we'd wind up shrinking caches' commit sizes below the actual number of
pages they contained in some cases.)
It seems this method was never renamed when MergeStore was renamed
to Merge all the way back in hrev27179. However, that wound up
working out, because this method also didn't call the base class
implementation that actually merges the page trees properly, so
it wouldn't have worked anyway.
Follow input_device_type above: we don't have _TYPE or _SUBTYPE on
the end, but _POINTING in the middle, because these aren't in a global
"subtype" enumeration, but a B_POINTING_DEVICE-specific enumeration.
Also don't bother adding the UNKNOWN type to messages that have no
type; if it's not included, UNKNOWN is implied. Saves a few CPU cycles.
Change-Id: I9088b9fcee63bf001b43febbe1e3ac17eb1792b4
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8635
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
"Move" now sounds like it has 'move' semantics (i.e. replaces this
structure's data with the other structure's data), while MoveFrom()
really had 'move+append' semantics (appends the other list's elements
to this list, and clears the other list.) To make this clearer, it's
here renamed to "TakeFrom".
This should reduce confusion with the other move-related APIs that
are starting to show up in the Haiku tree (e.g. "MoveFrom" in BRegion.)
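For illustration, the same semantics expressed with std::list as a stand-in
for the Haiku list class: TakeFrom() appends the other list's elements and
leaves the other list empty, rather than replacing this list's contents.

    #include <list>
    #include <cassert>

    template<typename T>
    void
    TakeFrom(std::list<T>& self, std::list<T>& other)
    {
        // 'move+append' semantics: append, leaving `other` empty.
        self.splice(self.end(), other);
    }

    int
    main()
    {
        std::list<int> a = { 1, 2 };
        std::list<int> b = { 3, 4 };
        TakeFrom(a, b);
        assert((a == std::list<int>{ 1, 2, 3, 4 }));
        assert(b.empty());
        return 0;
    }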
Change-Id: Ib0a61a9c12fe8812020efd55a2a0818883883e2a
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8634
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Reviewed-by: X512 X512 <danger_mail@list.ru>
This is now necessary after enabling delayed commitments for anonymous
mappings with PROT_NONE.
Change-Id: I33b76f9d9f6a1d560793e523b74e9ac9fd7a4f62
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8676
Reviewed-by: waddlesplash <waddlesplash@gmail.com>