This means that we won't try to change the commitment at all; it will be
up to the caller to do that instead.
Also move the commitment change from the beginning to the end of Rebase,
matching Resize. This way, we won't trip the new asserts added to Commit()
in the previous commits.
Add a relevant assert to vm_try_reserve_memory to make sure the
negative priority doesn't end up down that far.
Overcommitted caches should only have commitments equal to the
number of pages they actually contain, so we should decommit
whenever pages are discarded.
This changes the API of VMCache::Discard to return an ssize_t with the
total size of the pages that were discarded (or a negative error code on
failure.) It appears nothing besides VMCache itself checked the return
value; but it apparently never fails, so that's fine.
Also add asserts to Commit() that the new commitment at least
encompasses all pages the cache actually contains.
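For clarity, a minimal caller-side sketch of the new return convention
(the call form here is illustrative, not the exact signature):

    ssize_t discarded = cache->Discard(offset, size);
    if (discarded < 0)
        return (status_t)discarded;  // a negative value is an error code
    // on success, "discarded" is the size of the pages that were removed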
In copy_on_write_area, the copied cache should have the same overcommit
status as the original area, and in set_memory_protection, we shouldn't
change the committed size at all if the cache is overcommitting (otherwise,
we'd wind up shrinking caches' commit sizes below the actual number of
pages they contained in some cases.)
It seems this method was never renamed when MergeStore was renamed
to Merge all the way back in hrev27179. However, that wound up
working out, because this method also didn't call the base class
implementation that actually merges the page trees properly, so
it wouldn't have worked anyway.
Follow input_device_type above: we don't have _TYPE or _SUBTYPE on
the end, but _POINTING in the middle, because these aren't in a global
"subtype" enumeration, but a B_POINTING_DEVICE-specific enumeration.
Also don't bother adding the UNKNOWN type to messages that have no
type; if it's not included, UNKNOWN is implied. Saves a few CPU cycles.
Change-Id: I9088b9fcee63bf001b43febbe1e3ac17eb1792b4
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8635
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
"Move" now sounds like it has 'move' semantics (i.e. replaces this
structure's data with the other structure's data), while MoveFrom()
really had 'move+append' semantics (appends the other list's elements
to this list, and clears the other list.) To make this clearer, it's
here renamed to "TakeFrom".
This should reduce confusion with the other move-related APIs that
are starting to show up in the Haiku tree (e.g. "MoveFrom" in BRegion.)
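As an illustration of the renamed semantics, a hedged usage sketch (the
list type and call form here are assumptions, not the exact API):

    BObjectList<BString> destination, source;
    // ... populate both lists ...
    destination.TakeFrom(&source);
    // source's elements are now appended to destination; source is empty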
Change-Id: Ib0a61a9c12fe8812020efd55a2a0818883883e2a
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8634
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Reviewed-by: X512 X512 <danger_mail@list.ru>
This is now necessary after enabling delayed commitments for anonymous
mappings with PROT_NONE.
Change-Id: I33b76f9d9f6a1d560793e523b74e9ac9fd7a4f62
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8676
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
- It is dead code that has not been enabled for a long time.
- Asynchronous back-to-front framebuffer copying breaks the update session
logic and introduces flickering artefacts.
Change-Id: Ifefd711e8dcd900443ba976f5efe128744fef2ca
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8617
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
Reviewed-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
- It has not been enabled for a long time and is effectively dead code.
- It was previously tested and found to actually be slower even on
hardware less than 15 years old, so it has no benefits. Modern CPUs have
no problems with simple memory filling/copying operations. More complex
acceleration operations are not supported in the current accelerant driver
API.
- It breaks double buffering and reintroduces flickering artefacts.
- It is incompatible with antialiased CPU drawing, because reading GPU
framebuffer memory is extremely slow and reading is required for alpha
blending operations. So the rendering buffer must be in CPU memory; an
offscreen GPU buffer can't be used.
- Hardware 2D acceleration for modern hardware is usually implemented
using generic GPU rendering APIs such as OpenGL or Vulkan.
Change-Id: Ifb93c80cca4fc5f072e3166b29fc63b643ddb437
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8616
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
These were declared in this header on BeOS, so we need to keep
them around for ABI compatibility, but they are nonstandard
and no other C library besides glibc appears to provide them
at all (not even musl, and none of the BSDs.)
Reduces the size of WiFi drivers by a bit (and reduces the number
of symbols the kernel has to resolve within the binaries.)
Tested with realtekwifi, still works.
This will signal the package state change to
"pending" more quickly when the package is
being installed.
Change-Id: Ic0bbb0dbbe938f73348cb184aa1c3b83db90acd5
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8588
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Andrew Lindesay <apl@lindesay.co.nz>
The POSIX specification says that the behavior of specifying O_TRUNC
with O_RDONLY is "undefined", but the Linux manpages ominously state
"On many systems the file is actually truncated." I tested this,
and indeed on Linux the file is actually truncated.
This doesn't seem like a very sensible behavior, so in this commit
it's changed to return B_NOT_ALLOWED (EPERM) if those flags are
specified together. The FAT driver already did this, but most other
filesystem drivers just checked write access permissions and
truncated the file anyway; so this is indeed a behavioral change.
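A minimal sketch of the kind of check described above (the hook parameter
name and exact placement are assumptions):

    if ((openMode & O_ACCMODE) == O_RDONLY && (openMode & O_TRUNC) != 0)
        return B_NOT_ALLOWED;  // B_NOT_ALLOWED corresponds to EPERM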
Change-Id: If2e76782743ee91d934dc7e0c2f306f37b159a0f
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8625
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
And also return an error if the offset of an mmap() request isn't
page-aligned, rather than silently aligning it. libroot already
did this for mmap() itself, so this only affects things that invoke
the operation or syscall directly.
Fixes #19155.
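A minimal sketch of the added validation (the exact error constant
returned is an assumption):

    if ((offset % B_PAGE_SIZE) != 0)
        return B_BAD_VALUE;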
Change-Id: I081dd1492d06f56536c1dbb5d4028345f95c4460
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8622
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
One test is moved from map_cut_tests; the rest are new.
Includes a test for the cause of #19155.
Change-Id: I15abbf11f2c6db7385754825abbcc159414f6fd8
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8631
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
* Clarify the fResizeThreshold logic and remove the comment.
* Rename "count" constructor argument to "blockSize", as this is
what it actually does.
No functional change intended.
Change-Id: I993bf0e695f47da181e9fb50b9a964edfd4a0adc
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8629
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Breaks out some of the core data about a
package into a sub-model to later support
immutable models.
Change-Id: Ib75ba24c6848829c835199130fe58b0f2d6ebcde
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8587
Reviewed-by: Andrew Lindesay <apl@lindesay.co.nz>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
There are still a number of tests in the root that should be moved
to other subdirectories, but this at least gets the VM-related ones
into a subdirectory (and removes a stale entry in the VM Jamfile.)
This reverts commit 1db0961121.
It turns out the comment is not obsolete; what it refers to isn't
PAE systems but true 32-bit ones. I'm not sure we should use
64-bit cache offsets even there, but that's a decision for another
time.
We need at least 19 characters on 64-bit architectures:
2 for "0x", 16 for the pointer, and 1 for the \0. So just
use a round 20.
Fixes cut-off pointer values in the display.
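For illustration, roughly what the sizing amounts to (the pointer
variable here is a stand-in):

    // "0x" (2) + 16 hex digits on 64-bit (16) + '\0' (1) = 19; use 20.
    char buffer[20];
    snprintf(buffer, sizeof(buffer), "%p", somePointer);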
That is, if we have a path like "/nonexistent-1/nonexistent-2/file",
we shouldn't report "nonexistent-1" as the "leaf" node, but rather
nothing at all. Otherwise, a create_vnode() call that expects the result
to be a directory vnode plus a nonexistent file will get confused and try
to create a file "/nonexistent-1" rather than just bailing out.
This fixes #19062. The reproducer in that ticket caused a scenario
much like the above; and since rootfs doesn't support regular files,
it returned "EINVAL" when attempting to create the "/nonexistent" file.
After this change, we get ENOENT as expected.
Change-Id: Ifcaaa858403fb747858800afbf644051bb9913ad
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8621
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
This is an optimization in two ways: first, it allows us to avoid
unlocking the cache or needing the "unreferenced" store-ref acquisition
if we can write the next page in the same cache; and second, the
I/O will be much more tightly combined, as PageWriteTransfer will
be able to merge the iovecs more often and do less I/O (and on
spinning disks, we'll write adjacent regions more often, too.)
Based on some basic logging, this happens very often. I saw adjacent-write
counts of e.g. 203, 255 (kNumPages), 139, 15, 99, etc. There
were a fair number of 0s, but that case shouldn't add too much overhead
since we bail out very rapidly.
In the case of things like "dd if=/dev/zero of=file ...", this is a
major optimization, since it massively reduces lock contention between
the dd thread and the page_writer thread.
A compile benchmark seems relatively similar, maybe slightly faster.
Otherwise, the FD won't be closed, and then the underlying vnode
won't ever be released, leading to files whose space can't be reclaimed
without rebooting and running checkfs.
* Do not call entry_cache_add_missing from the FAT driver, because it
can lead the VFS to believe a filename is missing when it is
actually present (in a different case).
* Remove CopyFile code that was added to handle a race condition when
dragging multiple files to a FAT volume. The race condition only
occurred in the first place because of the above driver bug.
* Ensure the FAT driver can fail gracefully if dosfs_read_vnode is
called with an inode number that is not present in the FAT vcache.
Without any 'missing' entries in the entry cache, there is an
increased chance that multiple (non-missing) entries representing
the same file will be added to the entry cache, which can result in
the VFS calling the FS get_vnode hook on a file after it has been
unlinked.
* Follows up on https://review.haiku-os.org/c/haiku/+/7623.
Change-Id: I5667119d8149954e0c8a5829617a7d93a6fc7aae
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8595
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
* The Be Book specifies that events will be flushed on destruction.
The previous implementation of this class didn't do that, but
we ought to. Unless the queued events have attached buffers or
a cleanup hook, this is a no-op anyway.
* Use a mutex rather than a BLocker for the allocation lock. This
saves a semaphore per queue.
* Put the queue_entry in an anonymous namespace. It shouldn't have
any global symbols anyway, but it doesn't hurt.
* Fix #18962.
* bus->queue_count wasn't initialized yet, so the check could succeed or
not depending on the initial value of the memory: mostly 0 on release
builds, a non-zero value on nightly builds.
Change-Id: Id745932e8171abe3b8b78a3e9b2f2058c9507f7a
Reviewed-on: https://review.haiku-os.org/c/haiku/+/8618
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Haiku-Format: Haiku-format Bot <no-reply+haikuformatbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>