This is almost entirely up to the motherboard manufacturer; implementations can vary widely even between models from the same company.
If you are lucky, the manual for your exact board model will have the details.
Most commonly:
The additional lanes can be provided by the chipset, with some extra latency since they aren’t wired directly to the CPU; they also share the chipset’s single uplink to the CPU.
The lanes can be routed through a switch or multiplexer chip on the motherboard, which may limit how many lanes can be used simultaneously. For example, an M.2 slot may become unusable when the second x16 slot is populated, and so on.
Another thing to note is that a PCIe 3.0 lane has roughly double the bandwidth of a PCIe 2.0 lane, so PCIe 2.0 lanes can be provided from a 3.0 link at 2:1 via a “down converter” (bridge) chip, reducing the total number of PCIe 3.0 lanes needed (rough per-lane numbers are sketched below).
The board could be using any combination of the above.
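To put rough numbers on the “double the bandwidth” point above, here is a minimal Python sketch of the per-lane arithmetic (the transfer rates and encodings come from the PCIe 2.0/3.0 specs; the 4-lane 2:1 bridge at the end is just a hypothetical illustration, not a claim about any particular board or chip):

```python
# Per-lane bandwidth arithmetic behind the "PCIe 3.0 is ~2x PCIe 2.0" point.

GENS = {
    # generation: (transfer rate in GT/s per lane, encoding efficiency)
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def lane_bandwidth_gbps(gen: str) -> float:
    """Usable bandwidth of a single lane, in gigabits per second."""
    rate_gt_s, efficiency = GENS[gen]
    return rate_gt_s * efficiency

for gen in GENS:
    gbps = lane_bandwidth_gbps(gen)
    print(f"{gen}: ~{gbps:.2f} Gbit/s (~{gbps / 8 * 1000:.0f} MB/s) per lane")

ratio = lane_bandwidth_gbps("PCIe 3.0") / lane_bandwidth_gbps("PCIe 2.0")
print(f"PCIe 3.0 / PCIe 2.0 per-lane ratio: ~{ratio:.2f}")

# Hypothetical 2:1 bridge: four PCIe 3.0 lanes upstream carry roughly the
# bandwidth of eight PCIe 2.0 lanes downstream.
print(f"4 x PCIe 3.0 lanes ~= {4 * ratio:.1f} x PCIe 2.0 lanes of bandwidth")
```

That ~1.97:1 ratio is why the 2:1 fan-out works out almost exactly, minus whatever overhead the bridge chip itself adds.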
There is not necessarily a standard rationale behind how PCIe lanes are split among the individual components and interfaces on the board.
Basically, there is a limited supply of I/O on the board, and how it gets rationed out comes down to the board designer.
There are some real-world limitations, like heat dissipation, physical trace length, and signal interference, that sway the manufacturer’s hand, but I believe that would be going way deeper than the original question intended.