1. What are algorithms and their importance in today’s world?
An algorithm is a set of instructions that must be followed in a specific order to solve a problem or perform a task. Its steps are arranged in a sequence that must be executed systematically to obtain the desired result. An algorithm is therefore built around the result its programmer intends, starting from a previously defined objective.
Algorithms are the basis for most computational activities and are increasingly significant in our daily lives. They are used in everything from routine daily operations to sophisticated computer programs and platforms which study users’ online behavior. For example, algorithms currently support the main functions of computers, cell phones, and tablets.
In addition, algorithms play a crucial role in the development of artificial intelligence and machine learning software, allowing systems to learn and make decisions based on previously collected data and information. These technologies have been applied in fields such as medicine, finance, transportation, and entertainment, revolutionizing how society shops, obtains information, works, and more.
Therefore, it is important to understand how algorithms function, their impact in today’s world, and the responsibility of the professionals who develop them to ensure that these tools are used ethically and fairly. In other words, the study and critical analysis of algorithms are essential to take full advantage of their potential in a conscious and sustainable way.
1.1. Artificial Intelligence Algorithms
Artificial intelligence (“AI”) refers to the ability of machines to learn and perform functions in a way similar to humans. It concerns technology’s ability to discern facts, reach conclusions, solve problems, and adapt to new situations based on available information.
“Machine learning” is a subfield of AI that seeks to develop algorithms capable of learning and improving their performance through experience and data. This approach allows computers to build mathematical models that execute tasks autonomously in pursuit of a specific goal: an algorithm that artificially learns how to perform a given task.
Different types of machine learning algorithms, such as supervised, unsupervised, and reinforcement learning, are selected according to the nature of the data and the problem to be solved.
1.2. Supervised, Unsupervised, and Reinforcement Algorithms
As mentioned earlier, machine learning can be categorized into three main types of algorithms: (i) supervised; (ii) unsupervised; and (iii) reinforcement learning. Each has different characteristics and applications, which are detailed below:
1.2.1. Supervised Algorithms
The supervised algorithm is the most basic type of artificial intelligence algorithm. Here, the system is fed data previously selected and labeled by humans. Each output is assigned a label (a numerical value or a class) so that the algorithm can predict the output label from the input information. From this data, which contains the inputs and the corresponding expected results, the machine identifies patterns and learns from them, adjusting its variables and mapping the inputs to the corresponding results.
An example of the application of supervised algorithms is the systems that financial institutions use to approve loans. The analysis carried out by such systems focuses on the customer’s credit history, and the data used to train the system has already been classified as favorable or unfavorable for the credit offer.
It should also be noted that supervised algorithms can be subdivided into classification algorithms and regression algorithms.
Classification algorithms are those in which the output can assume only one of a set of predefined labels, with the primary purpose of classifying items or samples according to characteristics observed by the supervisor. Regression algorithms, on the other hand, are those in which the output can assume any real value; that is, they work by predicting values from the variables present.
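As an illustration (not drawn from the source, and using invented data and labels), the two supervised subtypes can be sketched in a few lines of Python: a nearest-neighbor classifier that predicts one of a set of predefined labels, and a least-squares fit that predicts a real value.

```python
import math

# Hypothetical labeled training data: each point is (feature_1, feature_2),
# and each label was assigned in advance by a human supervisor.
train_points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 7.5)]
train_labels = ["low_risk", "low_risk", "high_risk", "high_risk"]

def classify_1nn(point):
    """Classification: predict the label of the closest training example."""
    distances = [math.dist(point, p) for p in train_points]
    return train_labels[distances.index(min(distances))]

def fit_line(xs, ys):
    """Regression: least-squares fit y = a*x + b, predicting a real value."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

label = classify_1nn((1.2, 1.1))        # near the "low_risk" examples
a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```

Both functions learn only from the labeled examples they are given, which is precisely what distinguishes the supervised setting.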
1.2.2. Unsupervised Algorithms
The second type is the unsupervised algorithm, which can organize data without requiring prior labeling. In this case, there is no labeling on the data that feeds the system, so the algorithm is forced to deduce the structure from the inputs by itself.
These algorithms are divided into transformation algorithms and clustering algorithms. Transformation algorithms recreate a representation of a given data set in a form more convenient than the original, making human interpretation easier or improving the performance of other learning algorithms. Clustering algorithms, on the other hand, partition data into groups with similar attributes based on pre-established criteria, which makes it possible to perceive certain patterns in the provided data.
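A minimal sketch of a clustering algorithm, using only the standard library and hypothetical two-dimensional data, is k-means: it partitions unlabeled points into groups with similar attributes by alternating between assigning points to the nearest centroid and moving each centroid to the mean of its group.

```python
import math

def kmeans(points, k, iters=10):
    """Minimal k-means sketch: partition unlabeled points into k groups."""
    # Farthest-first initialization (a deterministic simplification).
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points,
                             key=lambda p: min(math.dist(p, c) for c in centroids)))
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centroids[i]))].append(p)
        # Update step: each centroid moves to the mean of its assigned points.
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two visually obvious groups; note that no labels are provided anywhere.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
centroids, clusters = kmeans(data, k=2)
```

The algorithm deduces the two groups from the structure of the inputs alone, which is the defining trait of the unsupervised setting described above.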
Therefore, the unsupervised algorithm can find patterns in an unlabeled data set. This method is used to create agile decision-making systems and recognize and identify faces and voices, allowing the development of autonomous vehicles and drones, for example.
1.2.3. Reinforcement Learning Algorithms
The third category of algorithms is reinforcement learning. Unlike supervised and unsupervised algorithms, the reinforcement learning algorithm is based on the interaction between the algorithm and its environment. In other words, the reinforcement learning algorithm receives neither labeled data nor examples of correct solutions.
This methodology allows the algorithm to learn by trial and error, adjusting its behavior to achieve the best possible result. A classic example of reinforcement learning is the development of algorithms capable of playing complex games, such as Chess or Go, where the objective is to maximize the score and win the game. Other applications include robotics, navigation, process optimization, and recommender systems.
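To make the trial-and-error idea concrete, here is a small sketch (with an invented toy environment and parameters, not taken from the source) of Q-learning, a classic reinforcement learning algorithm: an agent on a five-state line receives a reward only at the rightmost state and gradually adjusts its behavior toward the actions that maximize that reward.

```python
import random

# Hypothetical toy environment: states 0..4 on a line; reward only at state 4.
N_STATES = 5
ACTIONS = (-1, +1)                      # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(42)

for _ in range(500):                    # training episodes
    state = 0
    while state != N_STATES - 1:        # an episode ends at the goal state
        # Trial and error: occasionally explore at random, otherwise exploit
        # the best known action (ties broken at random).
        if rng.random() < EPSILON:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (q[(state, a)], rng.random()))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: adjust behavior toward the best achievable result.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy should step right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

No labeled data or correct solutions are ever shown to the agent; it learns exclusively from the rewards its own actions produce.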
2. Is it possible to control all algorithms?
2.1. Transparency differences between supervised, unsupervised, and reinforcement algorithms
Recent developments in computing, particularly in areas such as AI, have built ever greater degrees of autonomy into software and computer systems. Developers use supervised and unsupervised algorithmic methods to enable systems to make decisions in unforeseen situations.
With the increasing complexity and autonomy of software, the transparency of algorithms is increasingly important. The purpose of algorithmic transparency is to reveal the often hidden inner workings of the operating software of computer systems.
As seen above, supervised algorithms follow instructions predetermined by the programmer. Information is entered into the system, processed by the algorithm, and the system then produces an output (result). The active phases of the algorithm are thus entirely under the programmer’s control; in other words, programmable (supervised) algorithms are far more transparent.
On the other hand, unsupervised algorithms can be more difficult to control and understand. This situation arises because, as already seen, these algorithms are designed to find patterns and structures in unlabeled data without the need for explicit guidance from a programmer, which can make them less transparent since the programmer does not have full control over the outputs and internal processes of the algorithm.
Reinforcement learning algorithms have an intermediate level of control. Although these algorithms are designed to learn and make decisions through interactions with the environment, they still receive feedback in the form of rewards or punishments, which can influence the algorithm’s behavior.
Finally, it is worth noting that the transparency of algorithms is a topic of great importance, especially in decision-making by the government, financial, and health institutions. Therefore, algorithm developers must strive to make their systems more transparent, allowing for a clearer understanding of how decisions are made and reducing potential risks or negative consequences.
2.2. Situations where the output of the artificial intelligence algorithm did not match the programmer’s expectations
Although artificial intelligence algorithms can perform tasks faster and more efficiently than humans, their outputs may not match the programmer’s expectations in some cases. A notorious example is the case of the robot Tay, created by Microsoft in 2016[2].
Tay was programmed to imitate the behavior of an American teenager and interact with Twitter users. However, within hours, the bot began sending insulting messages reflecting Nazi and homophobic ideas. This happened because Tay learned this behavioral pattern from its interactions with other users, building on the information already structured by the development team, and replicated it.
While Microsoft quickly discontinued the experiment and tweaked its AI, Tay’s case illustrates the challenges of controlling and predicting the outputs of artificial intelligence algorithms. Other examples include algorithms used in facial recognition systems that have had significant error rates, which can lead to discrimination and privacy issues.
3. Accountability Mechanisms
3.1. Levels of accountability vs. rigid control of artificial intelligence algorithms
For an algorithm’s functioning to be manageable, accountability must be a concern from the beginning of the project. It is critical to recognize that different machine learning methods allow different levels of control: while some are quite opaque, others can be well structured to support the decisions made.
Technological resources offer useful paths to explanation, such as algorithm audits, validation tests, and data transparency. Although these tools do not guarantee that the result produced by the algorithm is fair, they can demonstrate that there was no failure in the technique used and that the same decision-making policy was followed in different situations.
Furthermore, accountability in artificial intelligence can be encouraged through economic incentives such as awards and subsidies for organizations that implement ethical and responsible systems. Government regulations may also be needed to ensure the transparency and accountability of artificial intelligence systems in critical industries such as health and public safety.
Therefore, these technologies guarantee a certain degree of accountability even if some aspects of the algorithm’s work are kept private.
4. Relationship with autonomous vehicles
4.1. Artificial intelligence and autonomous vehicles
AI plays a key role in the development of autonomous vehicles. This type of vehicle uses sensors, algorithms, and machine learning systems to detect objects in the environment, predict traffic situations, and make decisions in real time. In addition, AI can help improve the efficiency of such vehicles, as well as reduce fuel consumption and pollutant emissions.
In this sense, one of the biggest challenges in the development of autonomous vehicles is guaranteeing the security and reliability of AI systems, which is where the accountability mechanisms mentioned above play a crucial role.
This is because it is important to understand how the AI that controls the car will act in unforeseen circumstances. This aspect is what makes the creation of autonomous vehicles so challenging, as it forces the programmer to test and validate the AI’s response to a wide variety of conditions to guarantee that the vehicle can operate safely in all situations (e.g., complex traffic situations such as busy intersections and construction zones).
In addition, it is important to ensure the transparency of AI systems so that users can understand how the vehicle makes decisions and reacts to different situations. However, for technological or even intellectual property reasons, it is often difficult for an organization to be completely transparent regarding the algorithm used in the vehicle.
Despite the challenges, autonomous vehicles have the potential to revolutionize transportation and transform the way we move around cities, as they can help reduce traffic accidents, improve transport efficiency, and reduce transport costs for citizen users. In addition, autonomous vehicles can provide transportation services for people with reduced mobility, the elderly, and other vulnerable groups.
For all these reasons, autonomous vehicles are among the artificial intelligence applications that most arouse interest in society and have received greater attention.
4.2. Responsibility of people involved with autonomous vehicles
The development of autonomous vehicles raises complex legal questions about liability in civil cases. As AI algorithms are responsible for making decisions in traffic situations, it is difficult to determine who is legally responsible in case of an accident – including manufacturers, suppliers, users, and owners, among others.
In cases where the vehicle is fully autonomous, acting beyond the manufacturer’s programming and without human intervention, it is difficult to speak of negligence or omission under the Civil Code. In semi-autonomous systems, however, where the vehicle’s autonomous action is interspersed with human intervention, negligence or omission can be discussed if the driver fails to act in clearly dangerous situations.
However, the autonomous decision taken by the automobile cannot be seen as a product defect from the standpoint of the manufacturer’s strict (objective) liability. This is because the risk inherent in independent judgments cannot be eliminated, and autonomous decisions can cause damage without this being foreseeable. Thus, it is impossible to “legitimately predict” that autonomous decisions will never cause harm.
Whatever the outcome of this legal quandary, there is no denying that we are faced with a rather peculiar situation: whereas in the past, the question of fault for an accident was limited to drivers and product defects, now a third “cognitive element” – artificial intelligence – is capable of making decisions with unpredictable consequences.
As a result, this situation raises the discussion about the legal personality of entities driven by cognitive systems based on artificial intelligence. In the face of so many uncertainties, the fact is that conventional theories still need to be adapted to new technologies to guarantee justice and equity in civil cases involving autonomous vehicles.