6 Comments

Wink if they also listen to our conversations.

This transition you are describing, with targeting and optimization happening on devices, using tons of locally stored data that never leave those devices, reminds me of the thesis of "The Origin of Consciousness in the Breakdown of the Bicameral Mind."

(Pause, for "bong rip" sound effect)

The "bicameral mind" hypothesis was that people used to hear voices of 'the gods', and just took these literally, until sometime around the bronze age collapse, people realized these voices came from inside their heads. The argument sounds like cities and large scale human structures were initially "computed in the cloud" - as a result of these stories that bounced around, with cloud servers replaced by the cultural superorganism.

Moving all this optimization and computation onto the devices (but still under cloud control!) starts to make the mobile devices themselves look ever more like people. The device looks even more like an extension of our own brains, this time with _someone else's_ value system encoded in it.

I would think one implication of all of this is a push to own more and more hardware.

The architecture you describe is similar to that of Tesla's FSD. All the data and compute remain resident on the car and are not transferred to Tesla (in this case not out of concern for privacy, but simply as a matter of latency). The driver does sign up to allow Tesla to interrogate the car's data for FSD debugging and training purposes, in which case, in connection with the specific event/accident, Tesla can identify the car (which functionally means the driver as well). Nonetheless, it proves that all the compute necessary to run incredibly complex multi-domain ML processes can be localized and miniaturized (in comparison to the giant compute facilities Waymo et al. install in their cars).

author

Interesting! I knew vaguely that Tesla built its own silicon to handle FSD locally, but didn't realize it was such a total on-car displacement of the compute. Waymo does the same... but just not as efficiently?

Right. Tesla has done an incredible job of providing a powerful, integrated, fault-tolerant stack with extremely low power consumption in-car. Waymo's stack is huge, legacy, and limited (geo-fenced, so localized data, with enormous numbers of data points mapped). It takes a formidable amount of power and space!

You don't really have to. If the behavior is a sequence of events, you can one-hot encode them; they will be unreadable except by the ML model you run them through, and you'll have all the data you need, especially if the first layers are computed on the phone itself.
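
A minimal sketch of that idea, assuming a small fixed event vocabulary and a toy dense first layer; the event names, layer shape, and weights here are hypothetical placeholders, not from any real on-device model:

```python
import numpy as np

# Hypothetical vocabulary of on-device behavior events (illustrative names only).
EVENT_VOCAB = ["app_open", "scroll", "tap_ad", "purchase", "app_close"]
EVENT_INDEX = {name: i for i, name in enumerate(EVENT_VOCAB)}

def one_hot_encode(events):
    """Encode a sequence of event names as a (seq_len, vocab_size) one-hot matrix."""
    encoded = np.zeros((len(events), len(EVENT_VOCAB)))
    for t, name in enumerate(events):
        encoded[t, EVENT_INDEX[name]] = 1.0
    return encoded

def first_layer_on_device(x, weights, bias):
    """Run only the first layer locally: what could leave the phone is this
    opaque embedding, not the raw event stream."""
    return np.maximum(x @ weights + bias, 0.0)  # dense layer + ReLU

# Example: the raw behavior never leaves the phone in readable form.
events = ["app_open", "scroll", "scroll", "tap_ad"]
x = one_hot_encode(events)
rng = np.random.default_rng(0)
W = rng.normal(size=(len(EVENT_VOCAB), 16))  # hypothetical first-layer weights
b = np.zeros(16)
embedding = first_layer_on_device(x, W, b)
print(embedding.shape)  # (4, 16): hard to interpret without the rest of the model
```

Without the downstream layers and their weights, the embedding is effectively opaque to anyone who intercepts it, which is the point being made about unreadability.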
