run_external_process() contained pipe file descriptor leaks (e.g. one
pipe end was never closed). The stdout could also have been captured
incompletely, since only a single read() was performed on the pipe.
Furthermore, should a child process try to write a larger amount of data
to the pipe, it would get stuck, because the parent process
wasn't consuming the data. The timeout would then trigger in these cases
even though the child process did nothing wrong.
This commit changes the implementation to follow a select()-based
approach that continually reads from the pipe, but discards data that
doesn't fit in the provided buffer.
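A minimal sketch of such a select()-based reader, not the actual gamemoded
code; the function name, buffer handling and per-wait timeout are assumptions
made for illustration:

#include <errno.h>
#include <sys/select.h>
#include <unistd.h>

/* Keep draining the pipe so the child never blocks on a full pipe buffer,
 * but only store what fits in the caller-provided buffer. */
static ssize_t drain_pipe(int pipe_fd, char *buffer, size_t buflen, int timeout_secs)
{
    size_t used = 0;
    char discard[256];

    if (buflen == 0)
        return -1;

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(pipe_fd, &readfds);

        struct timeval tv = { .tv_sec = timeout_secs, .tv_usec = 0 };

        int ready = select(pipe_fd + 1, &readfds, NULL, NULL, &tv);
        if (ready == -1) {
            if (errno == EINTR)
                continue; /* interrupted by a signal, retry */
            return -1;
        }
        if (ready == 0)
            return -1; /* timeout: the child produced no output in time */

        /* Read into the real buffer while there is room, otherwise into a
         * scratch buffer so the pipe keeps draining and the child can't stall. */
        char *dst = (used < buflen - 1) ? buffer + used : discard;
        size_t room = (used < buflen - 1) ? buflen - 1 - used : sizeof(discard);

        ssize_t n = read(pipe_fd, dst, room);
        if (n == -1)
            return -1;
        if (n == 0)
            break; /* EOF: the child closed its end of the pipe */
        if (dst != discard)
            used += (size_t)n;
    }

    buffer[used] = '\0';
    return (ssize_t)used;
}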
If there is no null byte among the first n bytes of the source, the
resulting string will not be properly null terminated.
Ensure that all strings copied via strncpy are properly
terminated: copy "sizeof (dest) - 1" bytes and manually terminate
the string in the cases where the array was not initialized.
Example compiler warning:
../daemon/gamemode-tests.c: In function ‘run_cpu_governor_tests’:
../daemon/gamemode-tests.c:326:4: warning: ‘strncpy’ specified bound
256 equals destination size [-Wstringop-truncation]
strncpy(defaultgov, currentgov, CONFIG_VALUE_MAX);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
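Applied to the names from the warning above (where CONFIG_VALUE_MAX is 256),
the pattern looks roughly like this; the wrapping function is invented for
illustration and is not the actual test code:

#include <string.h>

#define CONFIG_VALUE_MAX 256

static void copy_governor(const char *currentgov)
{
    char defaultgov[CONFIG_VALUE_MAX]; /* not zero-initialized */

    /* Copy at most sizeof(defaultgov) - 1 bytes so the specified bound no
     * longer equals the destination size (the warning shown above). */
    strncpy(defaultgov, currentgov, sizeof(defaultgov) - 1);

    /* strncpy adds no terminator when the source is at least as long as the
     * bound, so terminate manually since the array was not initialized. */
    defaultgov[sizeof(defaultgov) - 1] = '\0';
}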
This should hopefully fix issue #113, where the user was in the common situation of having an Intel and an NVIDIA GPU at card0 and card1 respectively, but the NVIDIA GPU was [gpu:0] according to the driver.
The drm device ID doesn't match up with the PCI device as expected, spotted in issue #113.
I can't test this myself just yet; I'll need data from the user in question to verify PCIDevice is the right value.
This also fixes the instances in testing where we don't have the nv overclock in use, but we do have the mode set.
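For reference, one way to tell which drm card is the NVIDIA one is to read the
PCI vendor ID exposed under the card's device node. This is only a sketch of
the idea, not the actual fix: the sysfs path layout and the vendor ID are
standard, but the function name and loop bound are made up here.

#include <stdio.h>

#define NVIDIA_VENDOR_ID 0x10de
#define MAX_CARDS 8

/* Returns the drm card index whose PCI vendor is NVIDIA, or -1 if none found. */
static int find_nvidia_card(void)
{
    for (int card = 0; card < MAX_CARDS; card++) {
        char path[64];
        snprintf(path, sizeof(path), "/sys/class/drm/card%d/device/vendor", card);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;

        unsigned int vendor = 0;
        int ok = fscanf(f, "%x", &vendor);
        fclose(f);

        if (ok == 1 && vendor == NVIDIA_VENDOR_ID)
            return card;
    }
    return -1;
}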
Solves issues explaining what the perf_level actually meant, and future-proofs for any PR that wants to set individual perf levels.
This covers the MVP for now, and simply allows pinning the power level to "high".
Full overclocking setup is somewhat more complicated, and it'll be better to implement that at the same time as the equivalent for Nvidia, where we're currently only really setting the top-end power level.
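Assuming this refers to the amdgpu sysfs interface, pinning the power level
comes down to writing "high" to the card's power_dpm_force_performance_level
node. A rough sketch only: the card index, function name and error handling
are illustrative, and gamemoded itself has to resolve the right card and run
with the required privileges.

#include <stdio.h>

/* Write the requested performance level (e.g. "high" or "auto") to the
 * amdgpu sysfs node for the given card. */
static int set_amd_performance_level(int card, const char *level)
{
    char path[128];
    snprintf(path, sizeof(path),
             "/sys/class/drm/card%d/device/power_dpm_force_performance_level", card);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return -1;
    }

    int ret = (fputs(level, f) >= 0) ? 0 : -1;
    fclose(f);
    return ret;
}

int main(void)
{
    /* Pin card0 to "high" while the game is running; "auto" would restore it. */
    return set_amd_performance_level(0, "high") == 0 ? 0 : 1;
}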