Big Tech, Services, and Responsibility
11 Jul 2021 | 6 minutes to read
Currently, Google and other big tech companies (Facebook, Twitter, TikTok, etc.) deploy a variety of algorithms and machine learning models to assist in managing and moderating users. From surfacing search results to generating content recommendations, these algorithms act as a proxy between us and what we request, silently guiding us who knows where. It’s this “guiding” part that has proved troublesome for Google and others (FANG, GAFAM, FAAMG, “Big Tech”, etc.).
Because the platforms control the presentation of content, governments and lawmakers have determined that the companies are responsible for that content. It has fallen on the companies to moderate it, which is no easy task. Just determining whether a piece of content is admissible can be difficult, and trying to define the line more precisely creates problems of its own; see how YouTube has been removing recordings of war crimes that are used in legal cases. To absolve themselves of responsibility, the companies need to make users responsible for their own data and actions. Internally at Google, I think it would be framed as a financial move rather than a user privacy move. Heck, maybe the next step would be to sell legal support to those same users and make even more money.
Because Google controls the various algorithms and the presentation of content, they end up responsible for moderation. This move could instead put control of the algorithm in the hands of each user, in a “personal assistant” sort of way, so that responsibility for content falls directly on the user. When a user posts objectionable content, law enforcement would go directly to that user’s “assistant” instead of needing to go through Google.
Google’s offerings encompass a wide variety of services, and behind those services sit large amounts of data. For example, Google Search is a service, but there is also all the indexing data that makes it work. Maybe instead of offering a subscription to everything, they sell the services piecemeal with a one-time payment (like Windows), and offer subscriptions to the data that backs those services. They could also sell service hosting, but the services would be packaged so that they could be deployed to any of the public clouds or to one’s own private cloud.
I’m sure many of the algorithms behind Google’s services have machine learning models at their core. One issue with making each user distinct is that instead of one single shared model, each user has their own personalized models, and it may be prohibitively expensive to train and manage that many. Each user would start with a “base” model, which would learn the user’s mannerisms as they interact with it. Keeping a distinct model for each user, and continually updating it, could be extremely expensive. Another solution could be to have “tiers”, similar to how users get avatars in the “Metaverse” in the book Snow Crash, by Neal Stephenson:
The couples coming off the monorail can’t afford to have custom avatars made and don’t know how to write their own. They have to buy off-the-shelf avatars. One of the girls has a pretty nice one. […] Looks like she has bought the Avatar Construction Set™ and put together her own, customized model out of miscellaneous parts.
They flicker and merge together into a hysterical wall. […] Wild-looking abstracts, tornadoes of gyrating light—hackers who are hoping that Da5id will notice their talent, invite them inside, give them a job. A liberal sprinkling of black-and-white people—persons who are accessing the Metaverse through cheap public terminals, and who are rendered in jerky, grainy black and white.
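The “base model plus per-user personalization” idea could look something like this. It’s a toy sketch, assuming a simple linear scoring model where each user stores only a small weight delta on top of the shared base; every name here is made up for illustration, not any real Google API:

```python
# Toy sketch: one shared base model, with per-user personalization
# stored as a small delta instead of a full model copy.

class BaseModel:
    """Shared weights, trained once and served to every user."""
    def __init__(self, weights):
        self.weights = list(weights)

    def score(self, features, delta=None):
        # Score with the shared weights, optionally shifted by a
        # user's personal delta.
        w = self.weights if delta is None else [
            b + d for b, d in zip(self.weights, delta)
        ]
        return sum(wi * xi for wi, xi in zip(w, features))

class UserProfile:
    """Per-user state: only a delta on top of the base weights.

    Storing a delta rather than a whole model is one way to keep the
    per-user cost down -- the "too many models" problem above.
    """
    def __init__(self, n_features, lr=0.1):
        self.delta = [0.0] * n_features
        self.lr = lr

    def update(self, base, features, target):
        # One gradient step on squared error, applied only to the
        # user's delta; the shared base weights never change.
        pred = base.score(features, self.delta)
        err = pred - target
        self.delta = [d - self.lr * err * x
                      for d, x in zip(self.delta, features)]

base = BaseModel([0.5, -0.2, 0.1])
user = UserProfile(n_features=3)
for _ in range(50):
    user.update(base, [1.0, 0.0, 1.0], target=2.0)  # user likes this item

print(round(base.score([1.0, 0.0, 1.0], user.delta), 2))  # personalized
print(round(base.score([1.0, 0.0, 1.0]), 2))              # shared base
```

The “tiers” fall out naturally: free users could score against the shared base (the black-and-white public-terminal avatars), while paying users carry a personal delta.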
Having users pay Google for the use of its services and data would allow Google to move away from advertising as the primary revenue stream. They could even still sell ads, with the services phoning home to Google for ads to display.
Maybe a solution could be to run the ML models on the user’s device. In the beginning this would be difficult, since mobile phones aren’t the most powerful devices. But Google already makes its own ML hardware (the Tensor Processing Unit, or TPU) and its own phone, so all they would need to do is join the two together. Google could then place more custom hardware in the device (like an mTPU) so that the training and learning would take place on the user’s device instead of on Google’s servers. This would also help separate responsibility a bit more. To accommodate older phones, or to make it phone-agnostic, they could build it as its own device that connects to a user’s existing phone through USB or maybe encrypted Bluetooth or something. It could be designed as a little USB passthrough that attaches to a phone in a pleasing way (integrated with the case or something).
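The on-device idea can be sketched too. This toy assumes a federated-learning-style setup: each device trains on its own private data, and the server only ever sees opaque weight updates, never the raw interactions. All function names are hypothetical:

```python
# Toy sketch of on-device training: raw data stays on the "device",
# and the server only aggregates weight updates (federated-style).

def train_on_device(weights, local_data, lr=0.05):
    """Run gradient steps locally; return updated weights.

    local_data is a list of (features, target) pairs that stays on
    the phone -- the server never receives it.
    """
    w = list(weights)
    for features, target in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, features))
        err = pred - target
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
    return w

def server_aggregate(updates):
    """Average the weights reported by many devices (no raw data)."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

base = [0.0, 0.0]
# Each device trains on its own private interaction data...
device_a = train_on_device(base, [([1.0, 0.0], 1.0)] * 20)
device_b = train_on_device(base, [([0.0, 1.0], 1.0)] * 20)
# ...and the server only sees the resulting weights.
new_base = server_aggregate([device_a, device_b])
print([round(w, 2) for w in new_base])
```

The split of responsibility maps onto the split of data: whatever trained the personal model never left the user’s hardware.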
This is all just using Google as the example, though. Under the guise of decentralization, companies could turn their current offerings into subscribable services that users pay to use. Users would be able to self-host their services (or pay an additional subscription to have them managed), and companies would still make money off the users. This would all be done to put responsibility squarely on the user, while also profiting from them. Additionally, by putting the tools of moderation and content control in the hands of users, people could form their own communities that are responsible for themselves. The trick is having a good enough user experience to coax people into using it.
What benefit would this bring to the user? For one, the models would be completely private to the user. It would allow people to place more trust in the services, knowing that big ol’ Google isn’t looking over their shoulder (as much…). The services would also be completely personalized to you, so the recommendations could be more exact. Paired with the improved privacy, I think people would also be more forthcoming about what they actually want from a service; people may not self-censor as much when they have greater trust in its privacy.
While there would be some benefits to the user, I think in the short term this would primarily benefit the big corporations. They wouldn’t have as much direct control, but they would be free of responsibility for much of the content users create, while opening a new revenue stream that comes directly from users instead of just through advertising.
Google just announced their new Pixel 6 phone, and with it announced that Google Tensor would debut on the device.
Echoing what I wrote above, they noted:
AI is the future of our innovation work, but the problem is we’ve run into computing limitations that prevented us from fully pursuing our mission. So we set about building a technology platform built for mobile that enabled us to bring our most innovative AI and machine learning (ML) to our Pixel users.