The technology conundrum

Two events occurred recently. The first: Google sacked an employee who claimed the artificial intelligence (AI) it was developing had become sentient. The other: a chess-playing robot attacked its seven-year-old opponent, who had confused the machine by playing fast. The sacking could be attributed either to the employee divulging company secrets or to promoting fake news. Whatever the case, human-taught machines have the potential to harm, as the chess-playing robot proved. The reason is simple: humans are unable to predict their own individual behaviour, or the behaviour of groups and those within them. Teaching machines to act or think like humans is no different from a non-swimmer teaching swimming in the deep end of the pool.

There are many ways to discuss the socio-economic evolution of the human race: through tools and what powered them; through their acceptance and rejection; through the shift in what constituted luxury and necessity; through the alleged spare time these machines provide us; and even through the catch-up that laws and society play once these technologies become so embedded that their intended use loses relevance. Individuals being replaced by technology is another.

Anyone trying to reach a human at a customer care centre is accosted by a voice offering myriad options. Choosing an option becomes a frustrating trial-and-error loop. The technology wall built between customers and a company's customer care representatives has been extended to websites too. Companies have deployed bots on their websites which, they claim, resolve complaints quicker. The bots are installed with basic solutions and then trained on the fly through machine learning. The customer who interacts with the bot is doing the company a favour without getting the problem resolved.

For the last few weeks, I have been interacting with what seems to be an unknown programme located deep in the bowels of the Facebook headquarters. Having not visited my page for a few months, I found I had been blocked. Part of the process of unblocking is validating an email id and then uploading some form of identity proof. A few days after I uploaded my ID, the reply came that it had not been accepted. Several potential reasons were provided, but the actual reason was nowhere in the email from the Facebook ‘security team’.

Email validation should have been enough. Any escalation should have involved a human on the other side checking my ID. The individual could have gone through the ID and realised that my birthdate does not match what I entered on Facebook, even though my face does. Putting two and two together, the individual would have either unblocked my account or asked why the date of birth differs.

The software on the other side probably works on a logical decision-making tree. Trying to make it think like a human is going to require exposure to many more such situations. But we humans are unique and therefore tend to come up with novel problems. At this rate, machines are going to spend a lifetime learning. A lifetime of learning: which human wouldn't want that? This would be a fine thing were it not for the fact that humans are depending on these machines to solve their problems.

Recently, it was found that the self-driving software in Tesla cars is unable to identify children on the road. This, after Tesla has already sold self-driving cars in the US. According to NPR, there are 830,000 Tesla cars with ‘Autopilot, "Full Self-Driving," Traffic Aware Cruise Control, or other driver-assist systems that have some control over speed and steering’. NPR reports there were 273 road accidents involving Teslas in the last year. One is sure that after each accident the software and technology were ‘updated’.

It is interesting to note that humans using such technologies become guinea pigs. However, one can be certain that no individual becomes a consumer knowing that being a guinea pig is part of the package.

No business pushes ‘become a guinea pig’ as a freebie to promote its product. All products are burnished with promises of ‘safety’, ‘control’, ‘state-of-the-art’, ‘improving lives’ and what have you.

By buying into these technologies, individuals are not only purchasing a product but are unknowingly participating in its validation and upgradation programme. There is a cost borne by the individual and society for this, but not by the company selling the product. Every upgradation becomes an opportunity to highlight how the company is keeping up with the times and how up-to-date the technology is.

Clinical trials are a key process in medical research. There are various stages to clinical research, which are bound by morality, ethics, and oversight. Even after successful completion of these trials, the drug must be approved by a government body.

There is no such system for introducing new technologies into society, even though they have a major impact on individuals and society. Sure, companies like Google have ethicists for their AI research. But not much can be said for it when Timnit Gebru, recognised as a leading ethicist on AI, was fired by the company in 2020. She was asked to leave because she refused to withdraw a paper she had co-authored on the dangers of AI learning and reproducing racist and geographically biased communication.

Technology development, especially of software that tries to imitate or replace humans, has become more of a social experiment. The frontier of such innovation is like the Wild West.

Getting back to FB: I tried to access my account again and this time was given the option of choosing three friends who could vouch for me. After they did so, I received an email from the ‘security team’ with a link to access my FB page and a password. When I clicked on the link, I got this message: ‘Sorry, something went wrong. We're working on getting this fixed as soon as we can’. Just goes to show, humans come through every time.

(Samir Nazareth is an author and writes on socio-economic and environmental issues)
