Let's assume (and we know what assumptions do!) that the device works by measuring the time it takes for a pulse of light to travel from the laser to the salt and back to a sensor, i.e., a simple time-of-flight (time/distance) measurement.
The speed of light in a vacuum is about 30,000,000,000 cm/s. Let's say the laser and sensor are mounted 10 cm (about 4") off the ground. The path length to the salt and back is therefore 20 cm, and the light pulse will require 0.667 nanoseconds (6.67E-10 sec) to traverse the 20 cm path.
Let's further say that the high limit is set to 20 cm, meaning the car has lifted 10 cm (about 4"). The light pulse will now require twice as much time as before, or 1.33 ns (1.33E-9 s), to traverse the 40 cm path (20 down, 20 back).
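If you want to play with the numbers, here is that round-trip arithmetic as a quick Python sketch. The 10 cm and 20 cm standoffs are just the values assumed above, not anything known about the actual device:

```python
# Round-trip timing for an assumed time-of-flight measurement.
# Standoff values are the ones assumed in the post, not device specs.
C_CM_PER_S = 3.0e10  # speed of light in a vacuum, cm/s

def round_trip_time_s(standoff_cm: float) -> float:
    # time for the pulse to go down to the salt and back up
    return 2.0 * standoff_cm / C_CM_PER_S

print(f"10 cm standoff: {round_trip_time_s(10.0) * 1e9:.3f} ns")  # ~0.667 ns
print(f"20 cm standoff: {round_trip_time_s(20.0) * 1e9:.3f} ns")  # ~1.333 ns
```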
In order to detect this change, the controller will need to resolve the difference between 0.667 ns and 1.33 ns, a difference of about 0.67 ns. Accurate measurements usually require a resolution 10x better than what you are trying to measure, meaning the controller might have to resolve 0.067 ns. Furthermore, the jitter in the timing path must be substantially less than that amount (this is the really hard part).
To put this in perspective, 0.067 ns is equivalent to a frequency of about 15 gigahertz. Pretty demanding, and impressive if it can be made to work. Perhaps the device uses a more sophisticated measurement approach (Doppler?).
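Carrying the same assumed numbers through to the resolution and frequency figures (the "10x better" rule of thumb is the one quoted above, not a spec for this device):

```python
# Timing difference, 10x resolution target, and rough frequency equivalent,
# using the assumed 10 cm / 20 cm standoffs from the post.
C_CM_PER_S = 3.0e10                      # speed of light, cm/s
t_low = 2.0 * 10.0 / C_CM_PER_S          # ~0.667 ns round trip at 10 cm
t_high = 2.0 * 20.0 / C_CM_PER_S         # ~1.333 ns round trip at 20 cm
delta = t_high - t_low                   # ~0.67 ns difference
resolution = delta / 10.0                # ~0.067 ns, per the 10x rule of thumb
print(f"difference:       {delta * 1e9:.3f} ns")
print(f"10x resolution:   {resolution * 1e9:.4f} ns")
print(f"equiv. frequency: {1.0 / resolution / 1e9:.1f} GHz")  # ~15 GHz
```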
Meanwhile, if the device is mounted on a car moving at 700 MPH (about 313 m/s), the car will have moved only (313 m/s * 1.33E-9 s) = 4.16E-7 meters, or about 0.016 thousandths of an inch, while the laser pulse is meandering its way down to the salt and back.
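And the same back-of-the-envelope check for how far the car moves during one pulse, again using the 700 MPH and 1.33 ns figures assumed above:

```python
# Distance the car covers during one assumed 1.33 ns round trip.
MPH_TO_M_PER_S = 0.44704
speed_m_s = 700.0 * MPH_TO_M_PER_S        # ~313 m/s
t_round_trip_s = 1.33e-9                  # round trip at the 20 cm high limit
moved_m = speed_m_s * t_round_trip_s      # ~4.16e-7 m
moved_mils = moved_m / 0.0254 * 1000.0    # in thousandths of an inch
print(f"distance moved: {moved_m:.2e} m (~{moved_mils:.3f} thousandths of an inch)")
```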
Good stuff. Curious to learn how the device actually works.