Schema of an artificial neural network used for machine learning
Picture Credit: futurehumanevolution.com
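The layered schema in the figure can be sketched in code as a tiny feedforward pass: inputs flow through a hidden layer to an output layer, each layer applying weights, a bias and a nonlinearity. The layer sizes and random weights below are purely illustrative, not taken from the figure or from any real system.

```python
# Minimal feedforward network sketch (illustrative sizes and weights).
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid squashes to (0, 1)
    return outputs

# 3 inputs -> 4 hidden units -> 2 outputs, randomly initialized
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b_hidden = [0.0] * 4
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b_out = [0.0] * 2

x = [0.5, -0.2, 0.8]            # one example input vector
hidden = layer(x, w_hidden, b_hidden)
output = layer(hidden, w_out, b_out)
print(output)                   # two activations, each between 0 and 1
```

Training such a network means adjusting the weights so the outputs match known targets; frameworks like TensorFlow automate exactly this at scale.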
Google’s strategy to reach AI supremacy
To reach AI supremacy, Google has built the world’s most powerful network of data centers across the globe. With its own AI-optimized hardware, Google currently serves 2 billion active Android devices and 1 billion YouTube users watching 1 billion hours of video every day, and responds to over 1 million search queries per second. Google Maps provides road information along 1 billion kilometers of roads across the planet, and Google Translate processes millions of translation requests each day. In Europe, Google’s flagship service, search, now has a market share of 90%.
With this infrastructure in place, Google is making a major strategy shift. In an earnings call late last year, Google CEO Sundar Pichai laid out the corporate mindset: “Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything. We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we’re in early days, but you will see us — in a systematic way — apply machine learning in all these areas.” ‘AI-first’, as this initiative is called, will bring AI and deep learning to all products and services Google offers.
To support this drive, Google is showing a much bigger commitment to producing its own hardware. At its hardware event in early October this year, the company introduced a new family of intelligent products, all made by Google: new Pixel smartphones, the Google Home Mini and Max smart speakers, a new Pixelbook laptop, the hands-free intelligent Google Clips camera as a follow-up to Google Glass, Google Pixel Buds earphones and an updated Daydream View headset. Across all these devices, one can interact with the Google Assistant, enhanced by services such as Google Translate, which can translate text or speech into over 100 languages in real time, and Google Smart Reply, which suggests responses to incoming emails.
As Google incorporates machine learning into all its products, it needs engineers who master the necessary skills. “The more people who think about solving problems in this way, the better we’ll be,” says Jeff Dean, head of Google Brain and one of Google’s most distinguished machine learning scientists. He estimates that of Google’s 25,000 engineers, only a “few thousand” — perhaps ten percent — are proficient in machine learning. He’d like that to be closer to a hundred percent. Consequently, Google is taking enormous steps to re-educate its engineering workforce.
As these services are provided at no charge, Google in return amasses huge amounts of personal data, which it uses for profiled advertising. Of Google’s staggering 2016 revenue of USD 90 billion, about USD 70 billion came from advertising. This market power has recently become an antitrust issue in Europe. On June 27, 2017, the European Union handed Google a record-breaking €2.42 billion fine for abusing its dominance of the search engine market in building its online shopping service, a dramatic decision with far-reaching implications for the company. By artificially and illegally promoting its own price-comparison service in searches, Google denied consumers real choice and denied rival firms the ability to compete on a level playing field, European regulators said. Google immediately rejected the commission’s findings and signaled its intention to appeal in court. However, Google also indicated it would comply with Europe’s demands to change the way it runs its shopping search service, a rare instance of the internet giant bowing to regulatory pressure to avoid further fines.
Possibly to repair the reputational damage this fine is causing, Google has announced the formation of an ethics unit which, in the long run, could advance its goal of AI supremacy despite the antitrust issues over its market dominance.
AI and Ethics
Ethics issues have damaged the credibility and image of the AI service industry for quite some time. In response, in September 2016, Google, IBM, Amazon, Facebook and Microsoft formed the alliance ‘Partnership on AI’ with the goal of studying and formulating best practices for AI technologies, advancing the public’s understanding of AI, and serving as an open platform for discussion and engagement regarding AI and its influence on people and society. Apple later joined the alliance as well, but so far little substance has emerged from it other than the presentation of a broad spectrum of topics to be addressed, such as:
- Social and societal influences of AI: to monitor AI advances that interconnect with people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. While AI technologies that personalize information and assist people with recommendations can provide valuable assistance, these technologies could also inadvertently or deliberately manipulate people and influence opinions.
- Fair, transparent, and accountable AI: to provide societal value by recognizing patterns and drawing inferences from large amounts of data. While such results promise to provide great value, we need to be sensitive to the possibility that there are hidden assumptions and biases in data.
Experience has shown that the underlying algorithms used to profile an individual can indeed be biased. In the US there has been wide discussion of racial bias built into algorithms that process credit applications, with black applicants finding it harder to obtain credit than white applicants. Some of these problems have been corrected; however, the question remains how decision-making based on machine-learned knowledge can comply with ethical standards, and how those standards can be incorporated into machine learning algorithms. The availability of face recognition and emotion sensing has compounded this problem, as AI technology begins to gain access to intimate behavioral data.
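One simple way such bias is surfaced in practice is by auditing a model’s decisions per group and comparing approval rates. The sketch below uses entirely hypothetical records and the “four-fifths rule” heuristic (a disparate-impact ratio below 0.8 is commonly treated as a red flag in US employment-law practice); it is an illustration of the auditing idea, not any real system’s data or method.

```python
# Hypothetical audit: per-group approval rates and the disparate-impact ratio.
# Each record is (group, approved?); the data below is invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` whose application was approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")

# Ratio of the lower approval rate to the higher one; values below 0.8
# suggest the decision process disadvantages one group.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, ratio = {ratio:.2f}")
```

An audit like this only detects a symptom; removing the bias requires examining the training data and features that produced the skewed decisions in the first place.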
Google’s new Ethics Unit
In a bold move to take leadership in the discussion about ethics, Google’s subsidiary DeepMind, also known for its AlphaGo software that beat the world’s best Go player, has announced the formation of a major new AI research unit called ‘DeepMind Ethics and Society’ (DMES), comprised of full-time staff and external advisors. On DeepMind’s website, the reason for setting up DMES is explained with the following ambitious mission statement:
“At DeepMind, we’re proud of the role we’ve played in pushing forward the science of AI, and our track record of exciting breakthroughs and major publications. We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work. As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI creates important and complex questions. Its impact on society—and on all our lives—is not something that should be left to chance. Beneficial outcomes and protections against harm must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.”
The unit is currently made up of eight DeepMind staffers and six external, unpaid fellows, among them Nick Bostrom, Professor at Oxford University and Director of the Future of Humanity Institute, James Manyika, Senior Partner at McKinsey and Chair of the McKinsey Global Institute and Jeffrey D. Sachs, Professor of Economics, Director of the Center for Sustainable Development at Columbia University and senior UN advisor.
The full-time team within DeepMind will grow to around 25 specialists within the next 12 months. DMES will work alongside technologists within DeepMind and fund external research focusing on issues which have been formulated by the alliance ‘Partnership on AI’. Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.
Conclusion
So far no other organization has the funding and talent necessary to plot the road towards an AI that incorporates ethical standards as set forth in the mission statement of Google’s new Ethics Unit. With the ongoing progress of AI technology, ethics will become one of the major topics in defining how humans and intelligent machines can successfully collaborate without threatening human existence. Our current antitrust framework does not regulate the impact of ethics built into products and services. We are entering a scenario where access to AI will be as vital to our existence as access to clean water, air or energy. Google’s new Ethics Unit might just provide the path that leads Google to the goal of AI supremacy regardless of antitrust regulations. As a result, Google, with its enormous financial resources and talent, might become the company that provides the best products and services for enhancing our biological intelligence, while at the same time our dependency on these services steadily grows with no alternative in sight. Perhaps a new economic theory is needed to avoid a social conflict of interest as the Singularity approaches.
Very well written. What Singularity 2030 tracks and engages with is also highly important. Below is a brief explanation of why I’m interested in AI, so you can relate to my appreciation and encouragement of your work.
FYI: I can roughly gauge what is coming with AI. I’m 60 and have experienced it up close, both through my family’s kids growing up with ever more technology and through my own continuous interest and curiosity over an IT (mathematics) career. I started coding at ETHZ (Pascal-S and Fortran; it was also the first time mathematics and information technology were offered at ETH). After three full years of study I married and left to work on Swissair’s reservation systems (at that time, besides some banks, the only place to work in IT). Then, from 1989, I worked in IT at Amadeus (GDS) as a Senior Manager. In recent years I have checked and tried all the open-source AI stacks and clearly saw Google’s broad lead in scale with TensorFlow.
Humanity will require very strong engineering (companies); most politics is too backdated or overrun (*) and only uses AI to win election campaigns. Your Singularity 2030 is one of the only places where I see this challenge outlined and investigated.
(*) I’m still pretty excited about what would be possible if knowledge were well applied; the pace is fantastic. I remember queuing to have my punch cards read by a huge IBM mechanical card reader (ETHZ), then coming back later to pick up the z-fold paper output (some with errors 😉). Then, at the end of 2016, I installed a native TensorFlow Android app on an 8-core, 4 GB ASUS phone costing 250 euros, something any kid can do today (Google has the best usability of the AI stacks, one key element for scaling). Pretty amazing times, and I am grateful to be part of them. I wish you good success with your various initiatives, research and writing. It will be key to gear AI dominance toward the wealth of humanity and not just one (or two) global companies.
(FYI, I maintain some bookmarks on my private site: https://noulloc.com/index.php/tech-bis )
Greetings, Hannes