2242 Commits

Author SHA1 Message Date
Pawel Dziepak
024541a4c8 kernel: Improve rw_spinlock implementation
* Add more debug checks
 * Reduce the number of executed instructions that lock the bus.
2013-11-21 02:25:03 +01:00
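For context, a minimal sketch of how a reader/writer spinlock of this general kind can be built on the kernel's atomics (assumed layout and names such as my_rw_spinlock and kWriterBit; not the actual rw_spinlock code):

```cpp
#include <SupportDefs.h>	// int32, atomic_add(), atomic_get(), atomic_test_and_set()

static const int32 kWriterBit = 1 << 30;

struct my_rw_spinlock {
	int32	count;	// active readers, plus kWriterBit while a writer holds the lock
};

static void
my_read_lock(my_rw_spinlock* lock)
{
	while (true) {
		// optimistically register as a reader
		if ((atomic_add(&lock->count, 1) & kWriterBit) == 0)
			return;
		// a writer is active: back off and wait for it to finish
		atomic_add(&lock->count, -1);
		while ((atomic_get(&lock->count) & kWriterBit) != 0)
			;
	}
}

static void
my_read_unlock(my_rw_spinlock* lock)
{
	atomic_add(&lock->count, -1);
}

static void
my_write_lock(my_rw_spinlock* lock)
{
	// claim the lock only when there is neither a writer nor any reader
	// (writers can be starved in this naive, reader-preferring version)
	while (atomic_test_and_set(&lock->count, kWriterBit, 0) != 0)
		;
}

static void
my_write_unlock(my_rw_spinlock* lock)
{
	atomic_add(&lock->count, -kWriterBit);
}
```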
Pawel Dziepak
308f594e2a kernel, libroot: Make scheduler modes interface public 2013-11-20 23:32:40 +01:00
Ingo Weinhold
c04f3a625a boot loader: Add safe mode blacklist submenu
It's a browser for the system package content, where entries can be
selected to blacklist them. The selected entries are removed from the
packagefs instance in the boot loader, so that e.g. selected drivers
won't be picked up. The paths are also added to the safe mode driver
settings and will be interpreted when the system packagefs instance is
mounted by the kernel.
2013-11-20 16:00:35 +01:00
Ingo Weinhold
6c7abe9829 boot loader: Menu[Item] API improvements
* Make Menu and MenuItem polymorphic.
* MenuItem:
  - Make SetMarked() virtual, so it can be overridden.
  - Add SetSubmenu() and Supermenu().
  - Delete the submenu in the destructor.
* Menu:
  - Add Entered()/Exited() hooks. They frame the time the user navigates
    the menu or any of its submenus. The hooks allow for subclasses
    populating their item list dynamically.
  - Add SortItems().
* Update boot loader menu copyright text to include 2013, now that it is
  over soon. :-)
2013-11-20 16:00:34 +01:00
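A self-contained illustration of the Entered()/Exited() hook pattern described above, using simplified stand-in classes rather than the real boot loader Menu API:

```cpp
#include <cstdio>

class Menu {
public:
	virtual ~Menu() {}
	virtual void Entered() {}	// called when the user navigates into the menu
	virtual void Exited() {}	// called when the user leaves it (or its submenus) again
};

class PackageContentsMenu : public Menu {
public:
	virtual void Entered()
	{
		// build the item list on demand, right before the menu is shown
		printf("scanning package contents...\n");
	}

	virtual void Exited()
	{
		// drop the dynamically created items again
		printf("clearing item list\n");
	}
};
```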
Ingo Weinhold
435fb01509 DoublyLinkedList: Add Sort() 2013-11-20 16:00:34 +01:00
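One straightforward way to sort an intrusive doubly linked list is insertion sort; a sketch with an assumed node layout (the actual DoublyLinkedList template works on its own link types):

```cpp
struct Node {
	Node*	next;
	Node*	previous;
	int		key;
};

// Sorts the list in ascending key order and returns the new head.
static Node*
sort_list(Node* head)
{
	Node* sorted = nullptr;
	while (head != nullptr) {
		Node* node = head;
		head = head->next;

		if (sorted == nullptr || node->key < sorted->key) {
			// insert at the front
			node->previous = nullptr;
			node->next = sorted;
			if (sorted != nullptr)
				sorted->previous = node;
			sorted = node;
		} else {
			// walk to the last sorted element with a key <= node->key
			Node* current = sorted;
			while (current->next != nullptr && current->next->key <= node->key)
				current = current->next;

			node->next = current->next;
			node->previous = current;
			if (current->next != nullptr)
				current->next->previous = node;
			current->next = node;
		}
	}
	return sorted;
}
```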
Ingo Weinhold
7e7f482590 SinglyLinkedList: Missing include 2013-11-20 16:00:34 +01:00
Pawel Dziepak
9c2e74da04 scheduler: Move mode specific logic to separate files 2013-11-20 09:46:59 +01:00
Ingo Weinhold
2fdd1d9ef1 khash: Move string hash functions to own header/source file
Unlike khash, they shouldn't be phased out (only renamed).
2013-11-19 15:08:34 +01:00
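For reference, a typical shift-and-xor string hash of the kind such a header would contain (a generic example, not necessarily the exact function that was moved):

```cpp
#include <stdint.h>

static uint32_t
hash_string(const char* string)
{
	uint32_t hash = 0;

	while (*string != '\0') {
		hash ^= hash >> 28;
		hash <<= 4;
		hash ^= (uint8_t)*string++;
	}
	return hash;
}
```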
Pawel Dziepak
d897a478d7 kernel: Allow reassigning IRQs to logical processors 2013-11-18 04:55:25 +01:00
Pawel Dziepak
955c7edec2 kernel: Measure time spent in interrupt handlers 2013-11-18 01:50:37 +01:00
Pawel Dziepak
6a164daad4 kernel: Track load produced by interrupt handlers 2013-11-18 01:17:44 +01:00
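A minimal sketch of the measurement idea behind these two commits (assumed structure and function names): time each handler invocation with system_time() and accumulate the result per interrupt vector.

```cpp
#include <OS.h>		// bigtime_t, system_time()

struct vector_stats {
	bigtime_t	handler_time;	// total microseconds spent in the handler
	int64		handled_count;	// number of invocations
};

static void
dispatch_io_interrupt(vector_stats* stats, int32 (*handler)(void*), void* data)
{
	bigtime_t start = system_time();
	handler(data);

	stats->handler_time += system_time() - start;
	stats->handled_count++;
}
```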
Pawel Dziepak
288a2664a2 scheduler: Remove sSchedulerInternalLock
* pin idle threads to their specific CPUs
 * allow scheduler to implement SMP_MSG_RESCHEDULE handler
 * scheduler_set_thread_priority() reworked
 * at reschedule: enqueue old thread after dequeueing the new one
2013-11-13 05:31:58 +01:00
Pawel Dziepak
829f836324 scheduler: Minor cleanup 2013-11-12 04:42:12 +01:00
Ingo Weinhold
e5f6591382 VM: vm_memset_physical(): Correct length parameter type 2013-11-11 22:27:52 +01:00
Pawel Dziepak
03fb2d8868 kernel: Remove gSchedulerLock
* Thread::scheduler_lock protects thread state, priority, etc.
 * sThreadCreationLock protects thread creation and removal and list of
   threads in team.
 * Team::signal_lock and Team::time_lock protect list of threads in team
   as well.
 * Scheduler uses its own internal locking.
2013-11-08 02:41:26 +01:00
Pawel Dziepak
72addc62e0 kernel: Introduce Thread::time_lock and Team::time_lock 2013-11-07 22:16:36 +01:00
Axel Dörfler
547cd462f8 trim: Added is_called_via_syscall() function.
* And use it in get_trim_data_from_user(), formerly known as copy_*().
* This fixes differentiating between user and kernel buffers.
2013-11-07 19:06:13 +01:00
Axel Dörfler
99086aa323 trim: Target SCSI UNMAP command instead of WRITE SAME.
* The UNMAP command is theoretically much faster, as it can get many block
  ranges instead of just a single range.
* Furthermore, the ATA TRIM command resembles UNMAP much more closely.
* Therefore, fs_trim_data now gets an array of ranges, and we use SCSI UNMAP
  to trim.
* Updated BFS code to collect array ranges to fully support the new
  fs_trim_data possibilities.
2013-11-07 19:03:47 +01:00
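A sketch of what a range-based trim request can look like (hypothetical field names; see the actual fs_trim_data definition for the real layout):

```cpp
#include <stdint.h>

struct trim_range {
	uint64_t	offset;		// start of the range to trim, in bytes
	uint64_t	size;		// length of the range, in bytes
};

struct trim_request {
	uint32_t	range_count;	// number of entries in ranges[]
	trim_range	ranges[1];		// variable-length array of ranges
};
```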
Pawel Dziepak
3519eb334a kernel: Change Thread::team_lock to rw_spinlock 2013-11-07 04:20:59 +01:00
Pawel Dziepak
defee266db kernel: Add read write spinlock implementation 2013-11-07 04:20:32 +01:00
Pawel Dziepak
d3e5752b11 scheduler: Performance mode is actually low latency mode 2013-11-07 01:50:20 +01:00
Pawel Dziepak
83983eaf38 kernel: Remove Thread::alarm 2013-11-07 01:40:02 +01:00
Pawel Dziepak
aa4aca0264 kernel: Protect signal data with Team::signal_lock 2013-11-07 01:32:48 +01:00
Pawel Dziepak
73ad2473e7 Remove remaining unnecessary 'volatile' qualifiers 2013-11-06 00:03:07 +01:00
Pawel Dziepak
273f2f38cd kernel: Improve spinlock implementation
atomic_or() and atomic_and() are not supported by x86 and need to be
emulated using CAS. Use atomic_get_and_set() and atomic_set() instead.
2013-11-05 22:47:18 +01:00
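A sketch of the resulting acquire/release pattern (assumed names, simplified): take the lock with a single atomic exchange, spin on plain reads in between so the bus is not locked on every iteration, and release with an atomic store.

```cpp
#include <SupportDefs.h>	// int32, atomic_get(), atomic_get_and_set(), atomic_set()

static void
my_acquire_spinlock(int32* lock)
{
	while (atomic_get_and_set(lock, 1) != 0) {
		// wait until the lock looks free before trying the exchange again
		while (atomic_get(lock) != 0)
			;
	}
}

static void
my_release_spinlock(int32* lock)
{
	atomic_set(lock, 0);
}
```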
Pawel Dziepak
077c84eb27 kernel: atomic_*() functions rework
* No need for the atomically changed variables to be declared as
   volatile.
 * Drop support for atomically getting and setting unaligned data.
 * Introduce atomic_get_and_set[64]() which works the same as
   atomic_set[64]() used to. atomic_set[64]() does not return the
   previous value anymore.
2013-11-05 22:32:59 +01:00
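In short, under the new interface the exchange and the plain store are separate operations; a tiny usage sketch:

```cpp
#include <SupportDefs.h>

static int32 sFlag = 0;

static void
example()
{
	int32 previous = atomic_get_and_set(&sFlag, 1);	// returns the old value
	(void)previous;

	atomic_set(&sFlag, 0);	// plain atomic store; no longer returns anything
}
```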
Pawel Dziepak
f4b088a992 kernel: Protect UserTimers with sUserTimerLock 2013-11-05 05:36:05 +01:00
Pawel Dziepak
4824f7630b kernel: Add sequential lock implementation 2013-11-05 04:16:13 +01:00
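A minimal sketch of a sequential (seqlock-style) lock built on a spin flag and a counter (assumed names, not the actual kernel interface): the counter is odd while a write is in progress, and readers retry if it changed.

```cpp
#include <SupportDefs.h>	// int32, atomic_add(), atomic_get(), atomic_get_and_set(), atomic_set()

struct my_seqlock {
	int32	count;
	int32	lock;	// stands in for a real kernel spinlock
};

static void
my_write_lock(my_seqlock* sl)
{
	while (atomic_get_and_set(&sl->lock, 1) != 0)
		;
	atomic_add(&sl->count, 1);		// count becomes odd: write in progress
}

static void
my_write_unlock(my_seqlock* sl)
{
	atomic_add(&sl->count, 1);		// count becomes even again
	atomic_set(&sl->lock, 0);
}

static int32
my_read_begin(my_seqlock* sl)
{
	int32 count;
	while ((count = atomic_get(&sl->count)) % 2 != 0)
		;							// a writer is active, wait
	return count;
}

static bool
my_read_retry(my_seqlock* sl, int32 count)
{
	// the read section must be repeated if a write happened in between
	return atomic_get(&sl->count) != count;
}
```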
Pawel Dziepak
3c819aaa72 kernel: DPC: remove schedulerLocked argument 2013-11-04 23:51:18 +01:00
Pawel Dziepak
11cacd0c13 kernel: Remove thread_block_with_timeout_locked() 2013-11-04 23:45:14 +01:00
Jérôme Duval
e2183a14c4 Increased kernel stack size by another page for 64-bit
* USB boot now works on x86_64 with PM.
2013-11-04 18:53:49 +01:00
Pawel Dziepak
d8fcc8a825 kernel: Remove B_TIMER_ACQUIRE_SCHEDULER_LOCK flag
The flag's main purpose is to avoid race conditions between the event handler
and cancel_timer(). However, cancel_timer() is safe even without
using gSchedulerLock.

If the event is scheduled to happen on a CPU other than the one that
invokes cancel_timer(), then cancel_timer() either disables the event
before its handler starts executing or waits until the event handler
is done.

If the event is scheduled on the same CPU that calls cancel_timer()
then, since cancel_timer() disables interrupts, the event either
executes before cancel_timer() runs or is already disabled by the time
the timer interrupt handler starts running.
2013-10-31 01:49:43 +01:00
Pawel Dziepak
c8dd9f7780 kernel: Add thread_unblock() and use it where possible 2013-10-30 03:58:36 +01:00
Pawel Dziepak
9c0ff0eed1 kernel: Add cpufreq module for Intel P-states
Since Sandy Bridge, managing P-states on Intel processors is much easier
and more powerful than with previous versions of EIST.
2013-10-30 00:55:03 +01:00
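On these CPUs a P-state request essentially boils down to writing the target performance ratio into the IA32_PERF_CTL MSR; a hypothetical sketch (x86_read_msr()/x86_write_msr() are assumed to be the kernel's MSR helpers):

```cpp
#include <SupportDefs.h>

static const uint32 IA32_MSR_PERF_CTL = 0x199;

extern uint64 x86_read_msr(uint32 registerNumber);
extern void x86_write_msr(uint32 registerNumber, uint64 value);

static void
set_performance_ratio(uint8 ratio)
{
	// bits 15:8 of IA32_PERF_CTL select the target core ratio
	uint64 control = x86_read_msr(IA32_MSR_PERF_CTL);
	control = (control & ~(uint64)0xff00) | ((uint64)ratio << 8);
	x86_write_msr(IA32_MSR_PERF_CTL, control);
}
```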
Pawel Dziepak
22d8248267 kernel: Add support and interface for cpufreq modules 2013-10-30 00:48:07 +01:00
Julian Harnath
7f64b301b1 Reduce lock contention in kernel port subsystem.
* Replace ports list mutex with R/W-lock.

* Move team port list protection to separate array of mutexes.
  Relieve contention on sPortsLock by removing Team::port_list from its
  protected items. With this, set_port_owner() only needs to acquire the
  sPortsLock for reading.

* Add another hash table holding the ports by name. Used by find_port()
  so it doesn't have to iterate over the list anymore.

* Use slab-based memory allocator for port messages. sPortQuotaLock was
  acquired on every message send or receive and was thus another point
  of contention. The lock is not necessary anymore.

* The port hash locks and Port::lock are no longer acquired in a nested
  fashion, to reduce the chances of blocking other threads.

* Make operations concurrency-safe by adding an atomically accessed
  Port::state which provides linearization points to port creation and
  deletion. Both operations are now divided into logical and physical
  parts, the logical part just updating the state and the physical part
  adding/removing it to/from the port hash and team port list.

* set_port_owner() is the only remaining function which still locks
  Port::lock and one or two of sTeamListLock[] in a nested fashion.
  Since it needs to move the port from one team list to another and
  change Port::owner, there's no way around it.

* Ports are now reference counted to make accesses to already-deleted
  ports safe.

* Should fix #8007.
2013-10-26 16:10:03 +02:00
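A sketch of the linearization-point idea (assumed names and states): the logical part of deletion flips Port::state atomically, and only the thread that wins that transition performs the physical removal from the hash tables and the team's port list.

```cpp
#include <SupportDefs.h>	// int32, atomic_test_and_set()

struct Port {
	int32	state;
	// ... hash links, owner, reference count, etc.
};

enum {
	PORT_STATE_ACTIVE = 0,
	PORT_STATE_DELETED
};

// Returns true for exactly one caller; that caller then removes the port
// from the hash tables and the team's port list.
static bool
logically_delete_port(Port* port)
{
	return atomic_test_and_set(&port->state, PORT_STATE_DELETED,
		PORT_STATE_ACTIVE) == PORT_STATE_ACTIVE;
}
```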
Pawel Dziepak
978fc08065 scheduler: Remove support for running different schedulers
The simple scheduler behaves exactly the same as the affine scheduler on a
single core. Obviously, the affine scheduler is more complicated and thus
introduces greater overhead, but quite a lot of the multicore logic was
disabled on single-core systems in the previous commit.
2013-10-24 02:04:03 +02:00
Pawel Dziepak
ed8627e535 kernel/util: Fix MinMaxHeap::_GrowHeap() 2013-10-24 00:59:58 +02:00
Pawel Dziepak
31a75d402f kernel: Protect lock internals with per-lock spinlock 2013-10-24 00:01:18 +02:00
Pawel Dziepak
4c4994435d kernel/util: Fixes in [MinMax]Heap implementation 2013-10-22 23:56:31 +02:00
Pawel Dziepak
5cf9b69b49 kernel/util: Minor improvements in Heap and MinMaxHeap
* [MinMax]Heap::ModifyKey(): Do not attempt to move the node if the key
   actually hasn't changed.
 * Allow allocating initial array at construction.
2013-10-21 21:24:05 +02:00
Pawel Dziepak
7ea42e7add kernel: Remove invoke_scheduler_if_idle 2013-10-21 02:38:57 +02:00
Pawel Dziepak
ea79da9500 kernel: Remove support for thread_queue 2013-10-21 02:30:20 +02:00
Pawel Dziepak
cd8d4e39fd kernel: Introduce scheduler modes of operation 2013-10-21 02:17:00 +02:00
Pawel Dziepak
343c489689 kernel: Create CPU topology tree 2013-10-21 01:33:35 +02:00
Pawel Dziepak
5cbf227236 kernel/util: Allocate only one array in MinMaxHeap 2013-10-20 23:33:55 +02:00
Pawel Dziepak
18c0d163ed kernel/util: Add MinMaxHeap implementation 2013-10-17 19:22:29 +02:00
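The defining rule of a min-max heap is that nodes on even levels satisfy the min property and nodes on odd levels the max property; a small helper illustrating that rule for a 0-based array layout (a generic sketch, not the actual implementation):

```cpp
#include <stddef.h>

static inline bool
is_on_min_level(size_t index)
{
	// compute the depth of the node in the implicit binary tree
	int level = 0;
	for (size_t i = index + 1; i > 1; i >>= 1)
		level++;

	return level % 2 == 0;	// even levels hold the min property
}
```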
Pawel Dziepak
278c9784a1 scheduler_affine: Use global core heap and per-core CPU heaps
There is a global heap of cores, where the key is the highest priority
of threads running on that core. Moreover, for each core there is
a heap of logical processors on this core, where the key is the priority
of the currently running thread.

The per-core heap is used for load balancing among logical processors
on that core. The global heap is used in the initial decision of where to
put the thread (note that the algorithm making this decision is not
complete yet).
2013-10-17 02:11:28 +02:00
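A self-contained sketch of the placement idea (assumed policy and names, using a standard priority queue instead of the kernel's own heaps): a new thread goes to the core whose highest running priority is currently lowest.

```cpp
#include <queue>
#include <vector>

struct CoreEntry {
	int core;				// index of the core
	int highestPriority;	// highest priority currently running on it
};

struct HigherKeyLast {
	bool operator()(const CoreEntry& a, const CoreEntry& b) const
	{
		// order so that the least contended core ends up at the top
		return a.highestPriority > b.highestPriority;
	}
};

typedef std::priority_queue<CoreEntry, std::vector<CoreEntry>, HigherKeyLast>
	CoreHeap;

static int
choose_core(const CoreHeap& heap)
{
	return heap.top().core;
}
```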
Pawel Dziepak
cf863a5040 kernel: Decide whether to use simple or affine scheduler
The simple scheduler is used when we do not have to worry about cache
affinity (i.e. a single core with or without SMT, or multiple cores with all
cache levels shared).

Once gSchedulerLock is replaced with more fine-grained locking, the affine
scheduler should also be chosen when the logical CPU count is high
(regardless of cache).
2013-10-16 18:39:25 +02:00
François Revol
bc1184c253 bootloader: Add an arguments_count field to stage2_args
Some boot platforms pass a non-NULL-terminated list of args
to the loader, so store the count here to avoid having to copy
the list itself.
2013-10-15 22:15:03 +02:00
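The shape of the change, as a hypothetical sketch (not the actual stage2_args layout): keep the argument vector as-is and carry an explicit count next to it.

```cpp
#include <stdint.h>

struct loader_args {
	const char**	arguments;			// may not be NULL-terminated
	int32_t			arguments_count;	// explicit count supplied by the boot platform
};
```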