Today, ethical concerns around both the use of artificial intelligence and the development of models are no longer a debate about the future; they are a real, critical concern of the present. Almost every AI system in use today can produce biased results, and the technology behind data mining and classification is not neutral. In this article we outline some proposals and ideas for moving toward more ethical and reliable AI projects. Join us on this fascinating topic!
As we pointed out in a previous 7Puentes post, the intensive use of AI across fields – from healthcare to manufacturing, oil & gas, consumer goods, finance, or any other productive sector – can improve the efficiency and quality of what is offered, but it also raises ethical concerns: privacy, discrimination, the security and reliability of algorithms, and bias in the data.
In our more than 15 years of experience working on AI projects and machine learning models, we have found that ethics remains a central issue on which companies need to focus their attention and act.
What we mean by ethical AI and how to build fairer models
But what does ethics in AI actually mean? That the models themselves are ethical? That the use we make of them is ethical?
- The problem of bias: First, we have to understand that a model works with data, that this data is statistical, and that it must represent the entire population under consideration fairly and equitably, with all its differences and particularities. As an example, suppose our recruiting area has developed an automatic CV screener, trained on data from past candidates. It may be very good at picking the best candidates out of a set of 20 resumes, yet we discover it has a discriminatory bias: it favors men over women, because there is virtually no data on women in executive positions. The model will therefore be biased when it scores or evaluates a candidate. This is a statistical bias, not only because the information fed to the model is biased, but because reality itself is biased, with a "glass ceiling" that makes women far less likely than men to reach leadership positions. This raises a key question: which do we prefer, a highly accurate resume screener that is biased because its data contains few women in senior roles, or a less accurate but more egalitarian model? We need to think about these questions because they go to the core of the model we are building: its quality, robustness, and reliability. How can we guarantee results that are both more accurate and fairer? Can we flag biased search results? What would, or should, the proper representation of women in the results be? (A minimal fairness check appears in the first sketch after this list.)
- The problem of explainability and robustness: Another common example involves autonomous driving, especially self-driving cars, which are already used as transportation in several countries and are increasingly regulated. Ethics is clearly needed to program a self-driving car, and above all for its autonomous decision-making. The big problem is which ethics to use, how to program it, and how to distribute responsibility in case of an accident. Who is to blame when the car runs someone over? The challenge is choosing which ethics is desirable. Do we minimize harm? Whose harm: the passenger's, or that of other road users? These and thousands of similar questions must be answered before thousands of cars are programmed to drive through our cities. And if these decision models are opaque, if the data (or metadata) used to train them cannot be understood or explained, we are at serious risk. To shed light on the debate, MIT built an online exercise called the "Moral Machine," which presented participants with a series of accident scenarios and asked them to choose an outcome. The result? There is no single ethic; there are many, and no one of them is better than the others. Some are more accepted in certain regions, but there is no universality, and the problem of blame for an accident remains: is the engineer who built the car responsible, or is the blame spread among the various actors involved? As for explainability, an algorithm is explainable if we can interpret and understand how it arrived at its predictions. This is critical because these tools compute over large amounts of interrelated data, and the computation can be very simple or extremely complex. It means understanding, for instance, why the autonomous car decided to drive over a pothole and damage itself rather than swerve around the pothole and run over a dog. All the data behind these models must be audited by companies to ensure there is no bias and that the training data is reliable. Robustness, on the other hand, is about making models secure enough that they cannot be attacked. This applies not only to autonomous cars, which can have security breaches, but also to connected "smart" cars, whose controls, such as brakes or steering, can be compromised. A model can be highly ethical, yet if an attacker can perturb its input data and confuse it, it fails; the model must be sufficiently robust (see the robustness sketch after this list). In fact, after an Uber test vehicle struck and killed a pedestrian in March 2018, the US justice system held the backup driver criminally responsible. The case was complicated, but the driver was blamed, in essence, for having "switched off" the "mind" of the semi-autonomous car. And if that mind had been left on, who would answer for the damage it caused?
- The privacy issue: The last important aspect is privacy. If a machine learning model has been trained on information that is not public, that is private, or that infringes intellectual property rights, we are in trouble. For example, given the tremendous advances in biomedicine, at some point we will be able to sequence anyone's genome; that genomic data is deeply personal information and belongs to each individual. Once a biomedical model has absorbed masses of patient or user data, there comes a point where it is impossible to go back and determine which of that data was private or sensitive. Another example is an AI assistant trained to act as a psychologist or therapist using transcripts of real therapy sessions with people who never consented to that use. This is an area where all of these developments will collide and have to be debated. Perhaps tomorrow we, as citizens, will consent to our private data being used for AI training if it benefits us in some way; in the meantime, the debate is open (see the privacy sketch after this list). And this is where questions arise about how doctors will keep diagnosing with the help of AI, how their work will change, and where their legal responsibility lies. The question that remains: which jobs or positions can be replaced by AI, and how many jobs could be lost?
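To make the bias trade-off concrete, here is a minimal sketch, assuming a hypothetical hiring log with a gender column and a binary screening decision, of how a team could measure the selection-rate gap between groups (the "four-fifths rule" commonly used as a red flag):

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of candidates the screener advances, within each group."""
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate.
    Values below ~0.8 are a common red flag (the 'four-fifths rule')."""
    return rates.min() / rates.max()

# Hypothetical screening results: 1 = advance to interview, 0 = reject.
candidates = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "advanced": [0,    0,   1,   1,   1,   0,   1,   1],
})

rates = selection_rates(candidates, "gender", "advanced")
print(rates)                          # F: ~0.33, M: 0.80
print(disparate_impact_ratio(rates))  # ~0.42 -> the screener is biased
```

A check like this does not fix the bias; it only makes the trade-off visible, so the team can decide whether to rebalance the training data or accept a less accurate but more egalitarian model.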
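On the robustness point, here is a minimal sketch of the "perturb the data and confuse the model" failure, using a toy scikit-learn classifier and an FGSM-style nudge against the decision boundary. The data, the "safe vs. obstacle" framing, and the epsilon are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy sensor readings: two well-separated classes ("safe" vs. "obstacle").
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)

x = np.array([[0.85, 0.9]])             # clearly an "obstacle" reading
print(model.predict(x))                  # -> [1]

# FGSM-style attack: nudge each feature against the class-1 direction.
# The nudge is smaller than the gap between the two clusters, yet it
# is enough to push the point across the decision boundary.
eps = 1.2
x_adv = x - eps * np.sign(model.coef_)
print(model.predict(x_adv))              # -> [0]: the prediction flips
```

This is the whole robustness problem in miniature: an input that still looks plausible to a human produces a completely different decision from the model.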
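On the privacy side, one widely studied mitigation is differential privacy: release only noisy aggregates, so that no single patient's record can be recovered from the published statistics. A minimal sketch of the classic Laplace mechanism follows; the biomarker data and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def private_mean(values: np.ndarray, epsilon: float, lo: float, hi: float) -> float:
    """Release the mean of a sensitive column with Laplace noise
    calibrated to its sensitivity (the classic Laplace mechanism)."""
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)  # one person can shift the mean this much
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Hypothetical sensitive measurements (e.g., a biomarker from patient records).
biomarker = rng.normal(5.0, 1.0, size=1_000)

print(private_mean(biomarker, epsilon=1.0, lo=0.0, hi=10.0))  # ≈ true mean, noised
print(biomarker.mean())                                       # never published
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released value; choosing that trade-off is exactly the kind of ethical decision the text describes.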
Data Scientists, Ethics and the Oil & Gas Industry
During 7Puentes’ participation in the Energy Sector Conference in Houston, we saw this same concern about the ethics and reliability of AI models. The sector does not feel threatened by job losses – it protects its own sources of employment – but the concern about the robustness, reliability, and security of the models is clear.
Privacy is less of an issue here, since the data describes infrastructure more than people. The problem lies in how the AI models behave: energy companies use them to classify the level of risk of daily operations, the risk of accidents, and the safety of facilities, so the question is who is responsible when a model misclassifies those risks. And today, as a rule, nobody verifies these models for bias; a sketch of what such an audit could look like follows below.
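As a sketch of what such a bias check could look like, assume a hypothetical audit log pairing the model's risk predictions with what actually happened. Comparing the model's miss rate across facility types immediately shows where it under-flags risk:

```python
import pandas as pd

def false_negative_rate(df: pd.DataFrame, by: str) -> pd.Series:
    """Among truly high-risk operations, how often does the model say 'low risk'?
    A rate that varies widely across facility types is a red flag."""
    high_risk = df[df["actual_risk"] == "high"]
    return (high_risk["predicted_risk"] == "low").groupby(high_risk[by]).mean()

# Hypothetical audit log: model predictions vs. real outcomes per facility type.
ops = pd.DataFrame({
    "facility":       ["rig", "rig", "rig", "pipeline", "pipeline", "refinery"],
    "predicted_risk": ["high", "low", "high", "low",     "low",      "high"],
    "actual_risk":    ["high", "high", "high", "high",   "high",     "high"],
})

print(false_negative_rate(ops, by="facility"))
# rig: ~0.33, pipeline: 1.0, refinery: 0.0 -> the model systematically
# under-flags pipelines; someone has to own that failure mode.
```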
This stands in clear contrast to the responsibility of data scientists, who need no license or legally binding signature to act. Today, an engineer or architect must sign off on a building's blueprints; as data engineers, we still sign nothing that makes us accountable. It is something the industry needs to start asking about and discussing, because the output of AI models is not subject to the kind of rigorous controls that govern the construction of a building.
As an industry we still put little value on this kind of accountability, and more and more companies will have to do something about it. The Oil & Gas sector, which uses AI intensively, will be increasingly affected by this problem.
At 7Puentes, we work toward more ethical projects
We have developed several complex AI projects for leading companies, and we have been studying and applying these issues for more than a decade, especially with regard to regulations and compliance standards in artificial intelligence.
Ethics in the service of project quality is a crucial commitment for every member of our team.
If you are interested in adding these valuable components to your project, contact our specialists to ensure that your next projects do not neglect the ethical aspects of AI.
The future is today: You will need to address this problem much sooner than you think!