Fixes some PTE concurrent access bugs.
Change-Id: I09ec56861fae389a8a3e228b17a3921b85202c8b
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6949
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
* Replace count_low/count_high with bigtime_t fields plus an int32.
sizeof(spinlock) is now 32 bytes with the debug option enabled.
* Adjust and clean up all spinlock code to use the new fields.
* Fold DEBUG_SPINLOCK_LATENCIES into the new code. Remove the bootloader
option and other flags for it (these were not compiled in by default.)
The new code should be much easier to understand and also more powerful.
However, the information transmitted to userland isn't as useful now;
the KDL command output will have the interesting information.
(Things could be reworked to transmit more interesting information to
userland again if desired, but this code clearly hadn't been compiled
for many years, as it referred to global spinlocks that have been gone
for a very long time.)
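As a rough illustration of the described layout (field names below are
made up; the actual definition lives in the kernel headers), the
debug-enabled spinlock could look roughly like this:

    #include <SupportDefs.h>

    // Hypothetical sketch only; with the debug option enabled this comes
    // to the 32 bytes mentioned above (lock word, padding, two bigtime_t
    // fields, one int32, padding).
    typedef struct {
        int32       lock;               // the actual lock word
    #if DEBUG_SPINLOCKS                 /* stand-in for the real debug option */
        bigtime_t   last_acquired;      // when this lock was last taken
        bigtime_t   total_wait_time;    // cumulative time spent spinning
        int32       failed_attempts;    // contended acquisition count
    #endif
    } spinlock;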
Change-Id: I2cb34078bfdc7604f288a297b6cd1aa7ff9cc512
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6943
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
on x86_64 implemented with rdtscp or rdpid, generically with a syscall.
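For illustration only (not the commit's code), reading the current CPU
number on x86_64 could look roughly like the following, assuming the
kernel loads the CPU index into IA32_TSC_AUX:

    #include <cstdint>

    static inline uint32_t
    current_cpu_rdtscp()
    {
        uint32_t eax, edx, cpu;
        // rdtscp returns the TSC in EDX:EAX and IA32_TSC_AUX in ECX.
        asm volatile("rdtscp" : "=a"(eax), "=d"(edx), "=c"(cpu));
        return cpu;
    }

    static inline uint32_t
    current_cpu_rdpid()
    {
        uint64_t cpu;
        // rdpid reads IA32_TSC_AUX directly, without the ordering
        // constraints of rdtscp (requires the RDPID CPU feature).
        asm volatile("rdpid %0" : "=r"(cpu));
        return (uint32_t)cpu;
    }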
Change-Id: I8f776848bf35575abec8a8c612c4a25d8550daea
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6866
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Based on hamishm's original patch from 2015, but heavily modified,
refactored, and reworked.
From the original commit message:
> When an object is deleted, a B_EVENT_INVALID event is delivered,
> and the object is unregistered from the queue.
>
> The special event flag B_EVENT_ONE_SHOT can be passed in when adding
> an object so that the object is automatically unregistered when an
> event is delivered.
Modifications to the original change include:
* Removed the public interface (syscalls remain private for the moment)
* Event list queueing/dequeueing almost entirely rewritten, including:
- Clear events field when dequeueing.
- Have B_EVENT_QUEUED actually indicate whether the event has been
appended to the linked list (or not), based around lock state.
The previous logic was prone to races and double-insertions.
- "Modify" is now just "Deselect + Select" performed at once;
previously it could cause use-after-frees.
- Unlock for deselect only once at the end of dequeue.
- Handle INVALID events still in the queue upon destruction,
fixing memory leaks.
* Deduplicated code with wait_for_objects.
* Use of C++ virtual dispatch instead of C-style enum + function calls,
and BReferenceable plus destructors for teardown.
* Removed select/modify/delete flags. Select/Modify are now the same
operation on the syscall interface, and "Delete" is done when 0
is passed for "events". Additionally, the events selected can be fetched
by passing -1 for "events".
* Implemented level-triggered mode.
* Use of BStackOrHeapArray and other convenience routines in syscalls.
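As a toy illustration of the queueing invariant described above
(B_EVENT_QUEUED reflecting actual linked-list membership, guarded by the
queue lock), with made-up names and standard C++ containers standing in
for the kernel's structures:

    #include <cstdint>
    #include <deque>
    #include <mutex>

    constexpr uint32_t kEventQueued = 1u << 0;

    struct QueuedItem {
        uint32_t flags = 0;
        uint32_t events = 0;
    };

    struct ToyQueue {
        std::mutex lock;
        std::deque<QueuedItem*> list;

        void Enqueue(QueuedItem* item, uint32_t events)
        {
            std::lock_guard<std::mutex> locker(lock);
            item->events |= events;
            if ((item->flags & kEventQueued) != 0)
                return;                 // already linked: no double-insertion
            item->flags |= kEventQueued;
            list.push_back(item);
        }

        QueuedItem* Dequeue()
        {
            std::lock_guard<std::mutex> locker(lock);
            if (list.empty())
                return nullptr;
            QueuedItem* item = list.front();
            list.pop_front();
            item->flags &= ~kEventQueued;
            item->events = 0;           // clear events field when dequeueing
            return item;
        }
    };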
Change-Id: I1d2f094fd981c95215a59adbc087523c7bbbe40b
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6745
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
ASID allocation is not supported yet, so always use ASID 0 for user pages for now.
Change-Id: I021e77dae692c22984bc625dd0588362bece45b7
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6698
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
This requires introducing the flag B_USER_MUTEX_SHARED, and then
actually using the SHARED flags in the pthread structures to determine
when it should be passed through.
This commit still uses wired memory even for per-team contexts.
That will change in the next commit.
GLTeapot FPS seems about the same.
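For reference, the standard POSIX usage that (per the above) causes the
SHARED flag to be passed through looks like this; nothing here is
Haiku-specific:

    #include <pthread.h>

    // "mutex" is assumed to live in memory shared between teams,
    // e.g. in a shared area or mmap'd region.
    void
    init_shared_mutex(pthread_mutex_t* mutex)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }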
Change-Id: I749a00dcea1531e113a65299b6d6610f57511fcc
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6602
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
* Introduce SWDBM flag similarly to the arm64 port
* Reuse TEX[2] for the SWDBM flag, which should be available
  for use by the operating system if TEX remap
  is enabled.
* Introduce SetAndClearPageTableEntryFlags for updating
accessed and modified flags atomically
* Startup sequence is handled similarly to the accessed flag, i.e.
  the Modified flag is set on initially mapped pages in the bootloader and early map.
* Once the kernel initialization has progressed enough,
pages are mapped as read-only and modified flag handling is done
in the page fault handler.
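A minimal sketch of what an atomic set-and-clear of page table entry
flags can look like, using standard C++ atomics rather than the port's
actual PTE types:

    #include <atomic>
    #include <cstdint>

    // Retry a compare-and-swap until the set/clear is applied in one
    // atomic step, so concurrent hardware or software updates of other
    // bits in the entry are not lost.
    void
    SetAndClearPageTableEntryFlags(std::atomic<uint32_t>& entry,
        uint32_t flagsToSet, uint32_t flagsToClear)
    {
        uint32_t oldEntry = entry.load(std::memory_order_relaxed);
        while (!entry.compare_exchange_weak(oldEntry,
                (oldEntry & ~flagsToClear) | flagsToSet,
                std::memory_order_acq_rel, std::memory_order_relaxed)) {
            // oldEntry was reloaded by compare_exchange_weak; retry.
        }
    }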
Change-Id: I8f761e2c6325d1b91481abd569d5e8befded0761
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6518
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
Suppose the following scenario:
1. Thread A holds a mutex.
2. Thread B goes to acquire the mutex, winds up in kernel waiting.
3. Thread A unlocks; first unsets the LOCKED flag.
As WAITING is set, it calls the kernel; but instead of processing
this immediately, the thread is suspended for any reason (locks,
reschedule, etc.)
4. Thread B hits a timeout, or a signal. It then unblocks in the kernel,
which causes the WAITING flag to be unset.
5. Thread C goes to acquire the lock. It sets the LOCKED flag.
It sees the WAITING flag is not set, so it returns at once,
having successfully acquired the lock.
6. Thread A, suspended back in step 3, resumes.
Now we encounter the problem. Under the previous code, the following
would occur.
7. Thread A sees that no threads are waiting. It thus unsets the LOCKED
flag, and returns from the kernel. Now we have a mutex theoretically
held by thread C but which (illegally) has no LOCKED flag set!
8. Some other thread tries to acquire the lock, and succeeds, for LOCKED
is not set. We now have one lock owned by two separate threads.
That's very bad!
The solution, in this commit, is to (1) switch from locking mutexes
with "atomic_or" to locking them with "atomic_test_and_set", and (2)
mandate that _kern_unblock_mutex must be invoked with the mutex already unlocked.
Trying to solve the problem with (2) but without (1) introduces other
complications and would overall be more complex. For instance,
all existing userland code expected that it would set LOCKED, but then
check LOCKED|WAITING. If _kern_mutex_unlock does not unset LOCKED,
then whichever thread sets LOCKED when it was previously unset is
now the mutex's undisputed owner; if it fails to notice this, it
would deadlock.
That could have been solved with extra checks at all lock points, but
then that would mean locks would not be acquired "fairly": it would
be possible for any thread to race with an unlocking thread, and
acquire the lock before the kernel had a chance to wake anyone up.
Given how fast atomics can be, and how slow invoking the kernel is
comparatively, that would probably make our mutexes extremely "unfair."
This would not violate the POSIX specification, but it does seem like
a dangerous choice to make in implementing these APIs.
Linux's "futex" API, which our API bears some similarities to, requires
at least one atomic test-and-set for an uncontended acquisition,
and multiple atomics more for even the simplest case of contended
acquisition. If it works for them, it should work for us, too.
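A toy sketch of the acquisition path under the new scheme, with
standard C++ atomics standing in for atomic_test_and_set() and a
placeholder for the kernel blocking call; flag values and the real
retry/timeout handling are simplified:

    #include <atomic>
    #include <cstdint>

    constexpr int32_t kLocked  = 1 << 0;
    constexpr int32_t kWaiting = 1 << 1;

    static void
    block_in_kernel(std::atomic<int32_t>& lockWord)
    {
        // Placeholder for the kernel blocking call; the kernel wakes this
        // thread once the holder unblocks the (already unlocked) mutex.
        (void)lockWord;
    }

    void
    lock_sketch(std::atomic<int32_t>& lockWord)
    {
        for (;;) {
            int32_t expected = 0;
            // Only an atomic "was unlocked, is now LOCKED" transition
            // acquires the mutex. A plain atomic_or could set LOCKED while
            // another thread still owns it, enabling the races described above.
            if (lockWord.compare_exchange_strong(expected, kLocked,
                    std::memory_order_acquire))
                return;

            // Contended: mark ourselves as waiting and block in the kernel.
            lockWord.fetch_or(kWaiting, std::memory_order_relaxed);
            block_in_kernel(lockWord);
        }
    }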
Fixes #18436.
Change-Id: Ib8c28acf04ce03234fe738e41aa0969ca1917540
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6537
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Pages should not be marked as accessed when initially mapping them.
However, there's a short interval during kernel startup when new pages
are mapped but the fault handler is not installed yet.
Therefore, we set the Accessed flag to 1 in early_map.
Once kernel initialization has progressed enough, we start mapping
new pages with the Accessed flag set to 0.
The chicken and egg problem of initially mapping the vector page is
tackled by preallocating the vector page in the boot loader.
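Sketched below is the general software-managed Accessed-flag technique
in the fault handler; the flag constants are hypothetical and the real
ARM PTE layout differs:

    #include <atomic>
    #include <cstdint>

    constexpr uint32_t kEntryPresent  = 1u << 0;   // hypothetical bit values
    constexpr uint32_t kEntryAccessed = 1u << 1;

    // If the faulting page is mapped but its Accessed flag is still 0,
    // set the flag and retry the access instead of treating this as a
    // real fault.
    bool
    handle_access_flag_fault(std::atomic<uint32_t>& pageTableEntry)
    {
        uint32_t entry = pageTableEntry.load(std::memory_order_relaxed);
        if ((entry & kEntryPresent) == 0 || (entry & kEntryAccessed) != 0)
            return false;   // a real fault; fall through to normal handling

        pageTableEntry.fetch_or(kEntryAccessed, std::memory_order_relaxed);
        // ...invalidate the stale TLB entry here, then retry the instruction.
        return true;
    }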
Change-Id: Ie3be4f81812d7a090af57e8c79420598d16182b9
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6450
Reviewed-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
- Stored the additional start time of each team, expressed in
milliseconds since boot.
- Added more fields to the `team_info` structure. These fields
include those provided by the `get_extended_team_info` syscall as
well as the newly introduced `start_time`.
- Extended the `_kern_get_team_info` system call to receive an
additional `size_t` argument. If this size is smaller than or
equal to the size of the old `team_info` structure, the newly
added attributes will not be retrieved.
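A hypothetical sketch of the size-gated copy (struct and field names
made up, not the kernel's actual definitions):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    struct legacy_team_info { int32_t team; /* ...the original fields... */ };
    struct extended_team_info {
        legacy_team_info legacy;
        int64_t start_time;    // milliseconds since boot
        /* ...fields also provided by get_extended_team_info... */
    };

    void
    copy_team_info(const extended_team_info& info, void* buffer, size_t size)
    {
        // Callers built against the old headers pass the old, smaller size
        // and therefore never receive the newly added attributes.
        memcpy(buffer, &info, std::min(size, sizeof(extended_team_info)));
    }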
Change-Id: I22ee6b91ad2ee3b66a7f770036c79a718c5f115c
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6390
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Jessica Hamilton <jessica.l.hamilton@gmail.com>
THREAD_BLOCK_TYPE_OTHER implies the "object" pointer in the
wait information is a string. But sometimes we want to pass
through objects which are not strings, for inspection in KDL.
Only scsi_disk checks the actual value; other drivers take the logical block size.
This change reports the physical block size from the disk rather than the block
size used by IDE/SATA/SCSI commands. On typical modern SATA disks, the SATA
commands will use 512-byte blocks, but the disk will actually read and write
4K blocks internally. This is only of importance for partition alignment in DriveSetup,
and is independent of file systems or partitioning systems. It could also influence
the recommended block size for some file systems.
Change-Id: Id0f2e22659e89fcef64c1f8d04f81cd68995e01f
Reviewed-on: https://review.haiku-os.org/c/haiku/+/5667
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
Implemented the missing POSIX functions in <locale.h>:
newlocale, duplocale, uselocale, and freelocale, and also
provided missing type definitions for <locale.h>.
Implemented missing POSIX locale-based function variants.
Modified LocaleBackend so that it could support thread-local
locales.
Some glibc-like locale-related variables supporting
ctype and printf family of functions have also been updated
to reflect the thread-local variables present in the latest
glibc sources.
As there have been some modifications to global symbols
in libroot, libroot_stubs.c has been regenerated.
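Example usage of the newly implemented interfaces, following standard
POSIX semantics:

    #include <ctype.h>
    #include <locale.h>
    #include <stdio.h>

    int
    main()
    {
        locale_t loc = newlocale(LC_ALL_MASK, "en_US.UTF-8", (locale_t)0);
        if (loc == (locale_t)0)
            return 1;

        locale_t previous = uselocale(loc);   // install as the thread-local locale
        printf("isalpha('a'): %d\n", isalpha('a'));

        locale_t copy = duplocale(loc);       // independent copy, e.g. for another thread

        uselocale(previous);                  // restore the previous locale
        freelocale(copy);
        freelocale(loc);
        return 0;
    }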
Bug: #17168
Change-Id: Ibf296c58c47d42d1d1dfb2ce64042442f2679431
Reviewed-on: https://review.haiku-os.org/c/haiku/+/5351
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
* This will be needed for the following commit that implements
`pthread_tryjoin_np` and `pthread_timedjoin_np`.
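For reference, typical usage of these two functions (signatures as on
other platforms that provide the "_np" extensions):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static void*
    worker(void*)
    {
        return NULL;
    }

    int
    main()
    {
        pthread_t thread;
        pthread_create(&thread, NULL, worker, NULL);

        void* result;
        if (pthread_tryjoin_np(thread, &result) == EBUSY) {
            // Still running: wait up to one second (absolute timeout).
            struct timespec until;
            clock_gettime(CLOCK_REALTIME, &until);
            until.tv_sec += 1;
            if (pthread_timedjoin_np(thread, &result, &until) == ETIMEDOUT)
                fprintf(stderr, "worker did not exit in time\n");
        }
        return 0;
    }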
Change-Id: Idccb1aa588d6d10825294d14925d9bd046b65f19
Reviewed-on: https://review.haiku-os.org/c/haiku/+/5098
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
* Resolves an issue compiling icu70
* FreeBSD is 262,144
* Linux is 2,097,152
* Haiku was 131,072
This roughly doubles the maximum args length, and brings us in
line with FreeBSD today. If we have the shortest limit, we're going
to find a lot of things broken (such as ICU 70.1). Matching FreeBSD
means any limitations we see will also be seen on FreeBSD, making
for fewer "Haiku issues".
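The limit can be queried at runtime; after this change the reported
value should be 262,144, matching FreeBSD:

    #include <stdio.h>
    #include <unistd.h>

    int
    main()
    {
        printf("ARG_MAX: %ld\n", sysconf(_SC_ARG_MAX));
        return 0;
    }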
Change-Id: I677c0523a2f27c9e9901fda4180445bcb6da31b2
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4991
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
* Working under qemu smp 1,2+
* Working on SiFive Unmatched
* x86_64 efi not broken by smp_boot_other_cpus change
Change-Id: I32ebc17913e46ed082be9ade8f56448bbf12f16e
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4705
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
This file should ideally contain only those things needed
across all system headers, even POSIX ones, and all other
declarations (B_* ones especially) should go in SupportDefs.h.
However, as nothing but riscv64 uses this right now, I've just
moved it to there.
Set the first stack frame's return address to
<commpage>commpage_thread_exit, so it will be called
when the thread entry point returns.
Change-Id: Ide5cde8d4501eb7241e03ff4052174e984e78870
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4493
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
These were used in function_remapper.cpp but can be used elsewhere too,
so move them to a private header. Also use them for the stack protector
hidden function definition (probably not so useful since gcc2 doesn't
support using the stack protector anyway?).
The gcc2 way to make a symbol hidden is to manually generate the .hidden
directive in the assembler output. This is not perfect: it is hard to
use for C++ functions and methods (manual mangling of the name is
needed), and inline assembler can only be inserted inside functions. But
the alternative is patching gcc2 to add support for the function
attribute, and I don't want to dig into that today.
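A sketch of the technique (macro and function names here are
illustrative, not necessarily those in the private header):

    // Emit the .hidden directive from inline assembler, since gcc2 lacks
    // __attribute__((visibility("hidden"))). Must be used inside a
    // function body; C++ symbols would need manual name mangling.
    #define HIDDEN_FUNCTION(function) \
        asm volatile(".hidden " #function)

    extern "C" void
    __stack_chk_fail_local()
    {
        HIDDEN_FUNCTION(__stack_chk_fail_local);
        // ...actual implementation...
    }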
LA57 kernel support is required. We simply add a 5th level and enable the CR4
feature. The safemode option "256tb_memory_limit" is named after the 4gb one,
but the current support is limited to 512GB as before (this can be extended later).
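For illustration, the CPUID check and CR4 bit involved (this is not the
bootloader's actual code):

    #include <cstdint>

    constexpr uint32_t kCpuidLeaf7Ecx_LA57 = 1u << 16;  // CPUID.(EAX=7,ECX=0):ECX[16]
    constexpr uint64_t kCr4_LA57 = 1ull << 12;          // CR4.LA57

    static bool
    cpu_supports_la57()
    {
        uint32_t eax, ebx, ecx, edx;
        asm volatile("cpuid"
            : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
            : "a"(7), "c"(0));
        return (ecx & kCpuidLeaf7Ecx_LA57) != 0;
    }

    static void
    enable_la57()
    {
        uint64_t cr4;
        asm volatile("mov %%cr4, %0" : "=r"(cr4));
        // Note: CR4.LA57 can only be toggled while paging is being set up
        // (e.g. from the bootloader), not on a live 4-level kernel.
        asm volatile("mov %0, %%cr4" : : "r"(cr4 | kCr4_LA57));
    }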
Change-Id: I922774473c4a6112a0e4ff74162285ad58aa53af
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3552
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
This reverts parts of hrev52546 that removed the B_KERNEL_AREA
protection flag and replaced it with an address space comparison.
Checking for areas in the kernel address space inside a user address
space does not work, as areas can only ever belong to one address space.
This rendered these checks ineffective and allowed kernel-managed areas
to be unmapped, deleted or resized from their respective userland teams.
That protection was meant to be applied to the team user data area which
was introduced to reduce the kernel to userland overhead by directly
sharing some data between the two. It was intended to be set up in such
a manner that this is safe on the kernel side and the B_KERNEL_AREA flag
was introduced specifically for this purpose.
Incidentally the actual application of the B_KERNEL_AREA flag on the
team user data area was apparently forgotten in the original commit.
The absence of that protection allowed applications to induce KDLs by
modifying the user area and generating a signal for example.
This change restores the B_KERNEL_AREA flag and also applies it to the
team user data area.
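A rough sketch of the restored check, with a stand-in type and flag
value instead of the kernel's VMArea and B_KERNEL_AREA:

    #include <cstdint>

    constexpr uint32_t kKernelAreaFlag = 1u << 6;   // value illustrative only

    struct Area {
        uint32_t protection;
    };

    // Userland must not unmap, delete or resize areas the kernel manages,
    // such as the team user data area; kernel-originated requests are
    // still allowed.
    bool
    may_modify_area(const Area& area, bool requestFromKernel)
    {
        if (requestFromKernel)
            return true;
        return (area.protection & kKernelAreaFlag) == 0;
    }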
Change-Id: I993bb1cf7c6ae10085100db7df7cc23fe66f4edd
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2836
Reviewed-by: waddlesplash <waddlesplash@gmail.com>