The future of AI can be bright, once the myths around it are dispelled
With the rising ethical concerns around artificial intelligence, people often forget one thing: we are responsible for those we tame. The never-ending hunger to solve the mystery of AI has occupied scientists for decades. Alan Turing was the first computer scientist to ask whether machines can think, at a time when general-purpose computers had only just begun to be built. Later, in 1956, John McCarthy coined the term 'artificial intelligence' and opened the era in which people began building relationships with machines, teaching them how to learn and think.
Given the fast pace of development in AI, it is equally exciting and intimidating to consider what lies ahead. AI evolved from theory into research, then into experimentation, and today we can see how machine learning has affected a wide range of working processes, from digital work to human labor.
The increasing use of AI can be explained by its great value in automating processes, and its recent popularity by its commercial potential. Rather than developing gradually, AI has grown exponentially over the last couple of years, which is another cause of the fear associated with robotics. In the space of only a few years, it has moved from improving web search to powering machines and products used across all industries, thanks to deep learning, a combination of programming and cognitive science.
Deep learning allows a machine to learn by itself: if it is repeatedly shown images of an apple, for example, it will eventually learn what an apple is. The purpose, however, is not to make machines think like humans, but to make them think better and faster than we do.
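The idea can be seen in a few lines of code. The sketch below is not one of the large-scale deep learning systems the article describes, just a toy illustration of the same principle of learning from labeled images; the dataset and library (scikit-learn's bundled digits images and a small neural network classifier) are choices made here for illustration, not anything mentioned in the article.

# A toy illustration of learning from labeled images: a small neural network
# is shown labeled digit images over many passes and gradually learns to
# recognize them (the digits dataset stands in for the "images of an apple").
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1,797 labeled 8x8 grayscale images of handwritten digits (0-9)
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small feed-forward neural network; it is "shown" the training images
# repeatedly and adjusts its internal weights after each pass.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# After training, the model can recognize digit images it has never seen.
print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")

Real deep learning systems differ mainly in scale: far larger networks, millions of images, and specialized hardware, but the underlying loop of showing examples and adjusting the model is the same.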
According to Fortune, Microsoft and Google have already beaten the best humans at image recognition, while the Chinese company Baidu has outperformed people at recognizing two languages. As businesses began to realize the seemingly limitless capabilities of AI, they used it to boost production, develop new products, and solve problems that humans cannot. But if development continues at this pace, where does that leave people and their jobs?
Even though AI is still far from the frightening self-awareness predicted in many sci-fi films, machines can already compete with humans in the job market, particularly in the manufacturing and agricultural sectors, where smart machines are used as a more productive and efficient alternative to human labor. There are also expectations that robotics will soon evolve to the point where 'intellectual labor' could be replaced too.
In today’s business and economic environment, compliance is one of the obligations companies must handle with care. The recent Wells Fargo scandal, a failure of corporate culture and management, may encourage others to question how far employees can be relied upon. Compliance often involves arduous, tedious sifting through transactions, where a single mistake can lead to serious consequences. To stay out of trouble, businesses have started using machines equipped with natural language processing systems capable of taking over the 'knowledge' jobs associated with compliance across all relevant business sectors.
Because AI has not developed gradually, machine learning has had its winters and droughts too: before the boom of the 2010s, breakthroughs were only occasional. Stanford University has attempted to estimate the state of AI in 2030 and concluded that it is highly unlikely to deliver a life-changing product or to threaten humanity by then. So, while enjoying the current advances in machine learning, deep learning enthusiasts and businesses should be careful with their hopes, as AI may simply under-deliver on expectations. Relevant legislation to regulate the current state of AI still matters, though, even if the technology is not going to destroy the planet anytime soon.
The social and ethical impact of AI will still shake up the world. The technology is at a crossroads: it can continue to benefit society or turn down a path that harms it. Governments' vision and understanding of AI vary significantly from one country to another, with some preferring to wait for life-changing advancements and others already preparing. The UK has taken the latter direction, with London, one of the biggest tech hubs in the world, hosting very active AI development.
The UK’s Science and Technology Committee recently prepared a report on AI and robotics in which members thoroughly discussed its societal impact in the near future. The government acknowledges that the technology is going to disrupt the way people live and work, so the committee attempted to understand to what extent that will happen: ‘While we cannot yet foresee exactly how this “fourth industrial revolution” will play out, we know that gains in productivity and efficiency, new services and jobs, and improved support in existing roles are on the horizon, alongside the potential loss of well-established occupations,’ the report says. Its highlights included the need to up-skill or re-skill the workforce in order to minimize any potential harm from the technology, and a recommendation to establish a standing Commission on Artificial Intelligence, based at the Alan Turing Institute, to examine the social, ethical, and legal implications of AI developments.
Any fear of new technology usually stems from a lack of transparency and exposure. To avoid further public anxiety, particularly about job losses, information on developments, initiatives, and study findings should be made available to educate society. Rob McCargow, Artificial Intelligence Leader at PwC UK, told us that this exposure shouldn’t be limited to certain countries, so that people and businesses can benefit from achievements in robotics globally: ‘Until now, the technology sector and higher education institutions have taken AI innovation and standards into their own hands. This is a first step in the right direction to involve Government. We need to ensure all parties come together to develop the necessary regulation for building trusted and transparent AI systems to support future economic growth. Having the right standards in place is essential to take advantage of AI for the good of humankind — AI has no regard for international borders so we need a coherent global approach to regulation.’
Unlike in the movies, technology is not yet responsible for any of its actions and impacts, so it’s in people’s power to guide and direct AI.