U.S. Tech Giants Team Up to Ensure A.I. Is Developed Ethically and Safely

With any new technology comes associated risk; the important thing is to mitigate that risk early and to stay proactive rather than reactive. Protecting client data, following best safety practices, and keeping apprised of the newest safety precautions available to web developers are just some of the steps we take to stay safe, not only for ourselves but for our clients. Now some of the biggest names in technology have come together with a similar goal for artificial intelligence (AI). Take a look at their seemingly foolproof plan and see how it actually comes together.

Google, Facebook, Amazon, IBM, and Microsoft have come together to form a new organization in a bid to ensure that artificial intelligence (AI) is developed safely, ethically, and transparently.
The organization — announced on Wednesday and known as the Partnership on Artificial Intelligence to Benefit People and Society, or simply Partnership on AI — will aim to address some of the challenges that AI presents to people and society, while also figuring out how humanity can best take advantage of new technologies in the field, which has advanced rapidly in the last few years.
The Partnership, unveiled by AI executives from the founding companies during a telephone briefing with journalists on Wednesday, said it will carry out research and recommend how others in the field of AI should go about developing their systems.
During the call, the consortium dismissed concerns that this was effectively an attempt by the tech industry to self-regulate AI without government involvement.
AI has been tipped to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. But it’s also been described by the likes of renowned scientist Stephen Hawking and tech billionaire Elon Musk as something that could wipe out humans altogether.
Ensuring AI’s positive impact
Programming a machine to learn from its environment in a particular way is now the job of many software developers around the world. But some people are concerned that such machines could be programmed in a way that allows them to develop intelligence that could be used to cause harm, while others, such as Oxford University professor Nick Bostrom, think superintelligent machines will outsmart humans within a matter of decades.
Mustafa Suleyman, cofounder and head of applied AI at Alphabet-owned research lab DeepMind, said it’s “critical” for technology companies to start engaging with the public on how they’re developing AI. “The positive impact of AI will depend on the level of public engagement,” he said, adding that the new organization will complement Google’s own AI ethics board, which was created after the search giant acquired DeepMind for a reported £400 million in 2014 but has been kept under wraps ever since.
Yann LeCun, director of AI research at Facebook, and Ralf Herbrich, director of machine learning science and core machine learning at Amazon, said AI has the potential to improve the lives of millions of people, while also stating how crucial the technology is to the future of their platforms.
“As researchers in industry, we take very seriously the trust people have in us to ensure advances are made with the utmost consideration for human values,” said LeCun. “By openly collaborating with our peers and sharing findings, we aim to push new boundaries every day, not only within Facebook, but across the entire research community. To do so in partnership with these companies who share our vision will help propel the entire field forward in a thoughtful, responsible way.”
The Partnership will be funded and supported by the founding companies, which compete with one another across many other parts of their businesses. The location of the organization’s headquarters is yet to be decided, as are job roles and staff numbers, although decisions on these matters are expected in the coming weeks.
Through the Partnership on AI consortium, the US tech giants will carry out AI research into areas like:

  • ethics, fairness, and inclusivity
  • transparency, privacy, and interoperability
  • collaboration between people and AI systems
  • trustworthiness, reliability, and robustness of the technology

There will be equal representation of corporate and non-corporate members on the Partnership’s board, which will meet on a yet-to-be-determined basis. Academics and non-profits will also be invited to join, as will other organizations looking to monitor the development of AI, such as Elon Musk’s OpenAI. The Partnership said it is already in membership discussions with the likes of the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence (AI2).
“In the coming weeks and months we’ll be announcing non-corporate members of our Partnership and their roles,” said Suleyman.
8-point plan
Murray Shanahan, professor of cognitive robotics at Imperial College London, endorsed the formation of the Partnership, saying: “A small number of large corporations are today the powerhouses behind the development of sophisticated artificial intelligence. The inauguration of the Partnership on AI is a very welcome step towards ensuring this technology is used wisely.”
Yoshua Bengio, a professor at the University of Montreal and scientific director at IVADO, an organization that aims to partner industry professionals with academic researchers, added: “Bringing together the major players in the field is the best way to ensure we all share the same values and overall objectives to serve the common good.”
The Partnership on AI shares the following tenets:

  1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
  2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
  3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
  4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
  5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
  6. We will work to maximize the benefits and address the potential challenges of AI technologies, by:
    • Working to protect the privacy and security of individuals.
    • Striving to understand and respect the interests of all parties that may be impacted by AI advances.
    • Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
    • Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
    • Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
  7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
  8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

Source: Inc.com September 29, 2016