This fixes a maintenance problem where you have to update this otherwise
unrelated file to keep it in sync whenever you add a color constant.
I've added a B_COLOR_WHICH_COUNT constant to the color_which enum, which
should be updated to point to the newest color constant as new ones are
added. I reworked ServerReadOnlyMemory to use this constant instead of
referring to the current largest color constant directly. If you use
B_COLOR_WHICH_COUNT to refer to a color in your code, expect unpredictable
and nonsensical results. Most likely you'll get an undefined result that
comes back as black, but don't depend on it.
The net effect is that ServerReadOnlyMemory no longer needs to be updated
when new color constants are introduced, yet continues to produce correct
results.
Eliminate kNumColors constant, replace it with B_COLOR_WHICH_COUNT
This allows you to change the scrollbar thumb color in Appearance preferences.
The default color is 216, 216, 216, so the scroll bar thumb looks the same
by default. Perhaps someday this can be updated to something a bit more
colorful.
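Reading the new color would then look like any other UI color lookup; note
that the constant name here is my assumption, not something stated above:

    #include <InterfaceDefs.h>

    // Hypothetical constant name; yields {216, 216, 216} by default.
    rgb_color thumbColor = ui_color(B_SCROLL_BAR_THUMB_COLOR);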
* The package directory path specified when mounting must be interpreted
in the caller's I/O context. We always interpreted it in the kernel's I/O
context, so a relative path or a path in a chroot environment wouldn't
work correctly. Opening the directory is now done in
PackageDomain::Init(), where we can at least guess the path's origin.
* We also normalize the path, though merely to get more conclusive debug
output, since after opening the directory the path isn't used for
anything else anymore.
* Make the "packages" mount parameter optional. If not specified, use
the "packages" folder at our mount point.
B_ALREADY_WIRED, which was erroneously passed as the area protection
parameter to map_backing_store(), has the value 7, which implies user
readable and writable. Hence the address ranges around 0xdeadbeef and
0xcccccccc could actually be read and written from anywhere.
The tree comparisons didn't allow for different Nodes having the same
attribute value. Therefore only the first node would be added, and later
we would try to remove a node not actually in the tree, leading to a
crash.
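The usual fix for this class of bug is to give the comparator a unique
tie-breaker; a hypothetical sketch (the real tree code differs):

    // Hypothetical node type with a unique ID.
    struct Node {
        long long id;
        int attributeValue;
    };

    int
    CompareNodes(const Node* a, const Node* b)
    {
        if (a->attributeValue != b->attributeValue)
            return a->attributeValue < b->attributeValue ? -1 : 1;
        // Tie-break on the unique ID so distinct nodes with equal
        // attribute values never compare equal and both fit in the tree.
        if (a->id != b->id)
            return a->id < b->id ? -1 : 1;
        return 0;
    }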
packagefs finally seems to unmount cleanly now.
* Volume::_RemovePackageLinksDirectory(): We don't want to call
_RemovePackageLinksNode(). Besides the fact that the ID hash table has
already been emptied, so we would erroneously release a reference to the
node, the method doesn't do anything we want anyway. We don't want any
children to be removed, since we have already unregistered the volume's
packages (which removes the respective package link directories); the
remaining ones are from other volumes and not ours to remove.
* PackageFSRoot: Release a reference to the package links directory in
the destructor. We create the directory in Init(), after all, and no one
else takes over ownership.
Since we publish shine-through directories directly to the VFS, we need
to acquire an additional reference, because we release a reference when
a node is put.
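A toy illustration of that reference balance, using hypothetical types:

    // Hypothetical node type; the real code uses packagefs' Node class.
    struct Node {
        int referenceCount = 1;

        void AcquireReference() { referenceCount++; }
        void ReleaseReference() { referenceCount--; /* delete at 0 */ }
    };

    void
    PublishShineThroughNode(Node* node)
    {
        // Balanced by the release in the put hook below; without it
        // the node would be over-released.
        node->AcquireReference();
        // ... hand the node to the VFS here ...
    }

    void
    PutNodeHook(Node* node)
    {
        node->ReleaseReference();
    }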
PackageLinkDirectory::NotifyDirectoryAdded() first notified the listener
(Volume) about the directory itself, then about the links it contained.
Since Volume adds the nodes recursively, the latter were added twice,
resulting in a corrupted ID hash table.
On some 64 bit architectures program and library images have to be mapped
in the lower 2 GB of the address space (due to instruction-pointer-relative
addressing). The address specification B_RANDOMIZED_IMAGE_ADDRESS ensures
that the created area satisfies that requirement.
Placing the commpage and team user data somewhere near the top of the
user-accessible virtual address space prevents these areas from conflicting
with ELF images that need to be mapped at an exact address (in most cases:
runtime_loader).
This patch introduces randomization of the commpage position. From now on
the commpage table contains offsets from the beginning of the commpage to
the particular commpage entries. Similarly, addresses of symbols in the ELF
memory image "commpage" are just offsets from the beginning of the commpage.
This patch also updates KDL so that commpage entries are recognized and
shown correctly in stack traces. An update of Debugger is yet to be done.
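Resolving an entry under the offset scheme then amounts to one addition; a
sketch with assumed table and parameter names:

    #include <stdint.h>

    // offsetTable[entryIndex] is an offset from the beginning of the
    // commpage, not an absolute address, so the randomized base stays
    // transparent to callers.
    void*
    CommpageEntryAddress(uint8_t* commpageBase,
        const uintptr_t* offsetTable, int entryIndex)
    {
        return commpageBase + offsetTable[entryIndex];
    }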
Set the execute-disable bit for any page that belongs to an area with
neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.
In order to take advantage of the NX bit in 32 bit protected mode, PAE must
be enabled. Thus, from now on, it is also enabled when the CPU supports the
NX bit.
vm_page_fault() takes an additional argument which indicates whether the
page fault was caused by an illegal instruction fetch.
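A hedged sketch of the execute-disable rule; the constants below are
illustrative stand-ins, not the actual kernel definitions:

    #include <stdint.h>

    static const uint32_t B_EXECUTE_AREA         = 0x04;  // stand-in values
    static const uint32_t B_KERNEL_EXECUTE_AREA  = 0x40;
    static const uint64_t PAE_PTE_NOT_EXECUTABLE = 1ULL << 63;  // NX/XD bit

    // Mark the PAE page table entry non-executable unless the owning
    // area carries one of the execute permissions.
    uint64_t
    ApplyExecuteDisable(uint64_t pageTableEntry, uint32_t areaProtection)
    {
        if ((areaProtection & (B_EXECUTE_AREA | B_KERNEL_EXECUTE_AREA)) == 0)
            pageTableEntry |= PAE_PTE_NOT_EXECUTABLE;
        return pageTableEntry;
    }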
x86_userspace_thread_exit() is a stub originally placed at the bottom of
each thread's user stack; it ensures that any thread invokes exit_thread()
upon returning from its main higher-level function.
Putting anything that is expected to be executed on a stack causes problems
when implementing data execution prevention. The code of
x86_userspace_thread_exit() has therefore been moved to the commpage, which
seems a much more appropriate place for it.
When mmap() is invoked without an address hint, B_RANDOMIZED_ANY_ADDRESS
is used.
Otherwise, unless the MAP_FIXED flag is set (which requires mmap() to
return an area positioned exactly at the given address),
B_RANDOMIZED_BASE_ADDRESS is used.
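That selection boils down to something like this sketch (the enum is an
illustrative stand-in for the real address specification constants):

    #include <sys/mman.h>

    // Stand-ins for the Haiku address specifications.
    enum address_spec {
        EXACT_ADDRESS,
        RANDOMIZED_ANY_ADDRESS,
        RANDOMIZED_BASE_ADDRESS
    };

    address_spec
    ChooseAddressSpecification(void* addressHint, int mmapFlags)
    {
        if (addressHint == nullptr)
            return RANDOMIZED_ANY_ADDRESS;  // no hint: anywhere, randomized
        if ((mmapFlags & MAP_FIXED) != 0)
            return EXACT_ADDRESS;           // must map exactly at the hint
        return RANDOMIZED_BASE_ADDRESS;     // hint is a randomized lower bound
    }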
When forking a process, the team user data area is not cloned; a new one
is created instead. However, the new one has to be at exactly the same
address as the parent's team user data area. When a process execs, the
team user data area may be recreated at a random position.
This patch also makes sure that instances of struct user_thread in the team
user data are each in a separate cache line, in order to prevent false
sharing, since these data are very likely to be accessed simultaneously
from threads executing on different CPUs. This change, however, reduces the
number of threads a process can create; that is addressed by reserving
512 kB of address space in case the team user data area needs to grow.
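One way to express the cache-line separation, assuming 64-byte lines (the
real struct user_thread has more fields than shown here):

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64  // assumed line size

    // Sketch only: one slot per cache line, so threads on different
    // CPUs never share a line when touching their own user_thread.
    struct user_thread_slot {
        int64_t defer_signals;
        uint8_t _padding[CACHE_LINE_SIZE - sizeof(int64_t)];
    } __attribute__((aligned(CACHE_LINE_SIZE)));

    static_assert(sizeof(user_thread_slot) == CACHE_LINE_SIZE,
        "exactly one slot per cache line");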
A randomized equivalent of B_ANY_ADDRESS: when free space is found (as in
B_ANY_ADDRESS), the base address is then randomized using _RandomizeAddress,
pretty much like it is done for B_RANDOMIZED_BASE_ADDRESS.
B_RANDOMIZED_BASE_ADDRESS is basically B_BASE_ADDRESS with a
non-deterministic base address for the created area.
The initial start address is randomized, and the algorithm then looks for
a large enough free space in the interval [randomized start, end]. If that
fails, the search is repeated in the interval [original start, randomized
start]. If that also fails, the algorithm falls back to B_ANY_ADDRESS
(B_RANDOMIZED_ANY_ADDRESS when it is implemented), just like B_BASE_ADDRESS
does. The randomization range is limited by kMaxRandomize and
kMaxInitialRandomize.
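Condensed into a sketch (the helpers are assumed stand-ins for the real
allocator internals, declared but not implemented here):

    #include <stdint.h>
    #include <stddef.h>

    // Assumed helpers, not real API:
    uintptr_t _RandomizeAddress(uintptr_t start, uintptr_t end);
    bool FindFreeRange(uintptr_t start, uintptr_t end, size_t size,
        uintptr_t* _result);
    bool AllocateAnyAddress(size_t size, uintptr_t* _result);

    bool
    AllocateRandomizedBase(uintptr_t start, uintptr_t end, size_t size,
        uintptr_t* _result)
    {
        uintptr_t randomizedStart = _RandomizeAddress(start, end);

        // 1. Search [randomized start, end].
        if (FindFreeRange(randomizedStart, end, size, _result))
            return true;
        // 2. Retry in [original start, randomized start].
        if (FindFreeRange(start, randomizedStart, size, _result))
            return true;
        // 3. Fall back, just like B_BASE_ADDRESS does.
        return AllocateAnyAddress(size, _result);
    }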
In-page randomization of the initial user stack pointer is not only a part
of the ASLR implementation but also a performance improvement that helps
eliminate aligned 64 kB data accesses.
The minimal user stack size is increased to 8 kB in order to ensure that,
regardless of the initial stack pointer value, there is still enough space
on the stack.
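The in-page randomization amounts to something like this sketch (rand()
stands in for the kernel's entropy source; 16 bytes is the x86 stack
alignment requirement):

    #include <stdint.h>
    #include <stdlib.h>

    #define B_PAGE_SIZE 4096

    // Offset the initial stack pointer by a random, 16-byte-aligned
    // amount below one page, so different teams don't all start at the
    // same 64 kB-aligned addresses.
    uintptr_t
    RandomizeInitialStackPointer(uintptr_t stackTop)
    {
        uintptr_t offset = ((uintptr_t)rand() % B_PAGE_SIZE)
            & ~(uintptr_t)15;
        return stackTop - offset;
    }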