AI implementation – Potential errors to consider

By | 2019-02-13T16:16:52+00:00 August 30th, 2018 | AI, Technology

AI implementation – Once you have realized the potential AI can bring to your business and developed an AI strategy, it is essential to consider the mistakes that can arise when introducing Artificial Intelligence. Avoiding them can be decisive in determining whether the strategy works. Most of these potential errors are ones companies can easily avoid.

AI is everywhere these days and everyone is talking about it. Companies present new applications based on Artificial Intelligence at ever shorter intervals. However, since the technology is still in its infancy, the results can sometimes go badly wrong. For companies, AI failures can quickly turn into PR disasters and cause considerable damage.

For example, a Chinese company that built an autonomous AI-based nanny robot had a prototype get out of control at a children's toy fair: the robot smashed a glass showcase, and flying shards injured several visitors. Another, probably even better known failure was Microsoft's AI chatbot Tay, which communicated with the Twitter community. Some users fed Tay Nazi ideas and misogynistic content, and it quickly turned into a racist and sexist bot. Microsoft had to stop the experiment after only two days.

Decondia - Microsoft's AI chatbot Tay

These examples show that AI faults can cause considerable damage. It is therefore advisable to be aware of at least the most serious mistakes that companies should avoid at all costs when implementing AI.

Errors in the introduction of artificial intelligence in companies:

Quality of data

An AI is only as good as the data it is fed, because it learns from that data and only thereby becomes “intelligent”. If the data is already of low quality, you cannot expect the AI to deliver outstanding results.

Apparent correlations

An AI that learns through machine learning does nothing more than find correlations in the data. That is fine in itself. But we humans are very good at confusing spurious correlations with causality, and this can lead to AI-based strategic decisions that have a negative impact on the company.
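The classic illustration of this trap is a hidden confounder. The sketch below is a hypothetical, constructed example (the variable names and numbers are invented for illustration): a third factor, temperature, drives both ice-cream sales and drowning incidents, so the two correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden confounder (temperature) drives both quantities.
# Neither causes the other, yet their correlation is high.
temperature = rng.uniform(10, 35, size=365)                    # daily temperature
ice_cream_sales = 20 * temperature + rng.normal(0, 30, 365)    # driven by temperature
drownings = 0.5 * temperature + rng.normal(0, 2, 365)          # also driven by temperature

corr = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {corr:.2f}")  # strong, but not causal
```

An AI model trained on this data would happily use drowning numbers to predict ice-cream sales; a manager who reads that correlation as causation draws exactly the wrong strategic conclusion.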

In and out data

The quality of the data input is decisive for the quality of the AI output. Even high-quality data will not help if it does not match the data the AI was trained on; the results simply won't make sense. A simple example from the industrial sector: imagine you want to monitor machines with AI and detect potential defects early on to prevent production downtime. Suppose an AI already works on a machine that provides 10 data parameters for monitoring. Now another machine goes into operation that functions differently and performs other tasks, but happens to deliver the same 10 parameters. If the AI developed for the first machine were simply applied to the second, the results would almost certainly be bad.
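The machine-monitoring example above can be sketched in a few lines. This is a minimal, hypothetical illustration (the sensor values are invented): a simple 3-sigma anomaly detector fitted on machine A's normal readings is naively reused on machine B, which reports the same 10 parameters but runs at a different operating baseline, so B's perfectly normal readings are all flagged as defects.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal operation of two machines, each reporting the same 10 parameters
# but at different operating baselines (values are illustrative).
machine_a = rng.normal(loc=50.0, scale=2.0, size=(1000, 10))
machine_b = rng.normal(loc=80.0, scale=5.0, size=(1000, 10))

# Fit a naive anomaly profile on machine A only.
mean, std = machine_a.mean(axis=0), machine_a.std(axis=0)

def anomaly_rate(readings):
    # Flag a reading if any parameter is more than 3 sigma from A's profile.
    z = np.abs((readings - mean) / std)
    return float(np.mean(z.max(axis=1) > 3.0))

print(anomaly_rate(machine_a))  # near 0: A's normal data looks normal
print(anomaly_rate(machine_b))  # near 1: B's normal data is all "defects"
```

Same 10 inputs, same model, useless output: the detector has learned machine A's profile, not a general notion of "defect".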

Decondia - In and Output Data for AI

Legal problems

Admittedly, this is a problem that a company using AI cannot solve on its own. Legal problems arising from the use of AI are fundamentally a challenge for the judiciary. This concerns, for example, the question of liability when an AI-supported application causes damage, also known as the liability problem. A good example is accidents caused by autonomous vehicles: if an AI-supported vehicle injures a person, who is liable? The vehicle manufacturer, the AI developer, or perhaps the company operating the vehicle? These issues will drive reforms of our legal system. As far as your own AI project is concerned, you should at least consider what risks you are taking on if the AI does not go according to plan.

Integrating AI into the company structure

The decision to develop an AI strategy for the company also has consequences for the workforce. On the one hand, a successful AI can mean job losses for some of your employees, or at least a change in their responsibilities. AI-driven structural changes can also lead to a strained working climate: employees might feel their positions are threatened or that they are being stripped of their competencies. Management should prepare for such potential problems of AI implementation and develop a strategy to avoid these tensions.

There are some things to consider if you want your AI implementation to be successful. Preparing well for possible stumbling blocks, or avoiding them in advance, will at least increase the probability that the strategy succeeds.

Our previous blog post – The employees and AI