The ethical dilemmas of AI

Can we trust AI? It is already woven, visibly and invisibly, into our world, from Google Translate and video game bots to critical workloads in healthcare, manufacturing, and banking. Can we effectively harness this revolutionary technology and escape the inherent ethical dilemmas of bias, trust, and transparency? There is a way forward, but it will require ongoing and diligent conversations about the ethics of AI as the technology continues to evolve.

Do you trust AI?

The issue of trust often arises when human activity is entrusted to AI. For example, Tesla and other automakers are experimenting with AI-managed automated driving and parking capabilities. Autonomous driving pushes the limits of trust, as tests of this technology have led to human deaths. Lines are quickly drawn in the sand over whether or not we can trust a car even to park itself.

However, trust in AI is already inherent in some use cases: people extend it without even realizing it. In the case of fraud detection, text messages about suspicious transactions are sent before you even realize your wallet or credit card is missing. This type of detection happens in real time and can save you a huge financial headache.

Even in industries where AI is part of business-critical workloads, the issue of trust remains relevant. For example, in mainframe environments, some companies do not automatically take action when the AI discovers an anomaly. Although the AI did its job in detecting the anomaly, it doesn't understand that halting work on a mainframe could have catastrophic consequences for a business. In this case, operators don't trust the AI to make a better judgment than they would. As AI evolves, businesses and use cases will test when, where, and how much to trust this technology; ultimately, they will look at whether the data and/or results are explainable and unbiased.
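As a concrete illustration of that human-in-the-loop pattern, here is a minimal sketch in Python. All names, thresholds, and metrics here are hypothetical, not any vendor's actual product: the point is only that the AI flags the anomaly, while a human operator decides whether to act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Anomaly:
    metric: str
    severity: float  # 0.0 (benign) to 1.0 (critical)

def detect_anomaly(cpu_utilization: float, baseline: float = 0.60) -> Optional[Anomaly]:
    """Toy detector: flag utilization well above the expected baseline."""
    deviation = cpu_utilization - baseline
    if deviation > 0.25:
        return Anomaly(metric="cpu_utilization", severity=min(deviation / 0.40, 1.0))
    return None

def handle(anomaly: Anomaly) -> str:
    # Human-in-the-loop gate: the model only recommends; an operator decides,
    # because automatically halting a mainframe workload could be catastrophic.
    print(f"Anomaly in {anomaly.metric} (severity {anomaly.severity:.2f})")
    answer = input("Pause the affected workload? [y/N] ")
    return "paused" if answer.strip().lower() == "y" else "kept running, monitoring"

if __name__ == "__main__":
    finding = detect_anomaly(cpu_utilization=0.95)
    if finding is not None:
        print("Operator decision:", handle(finding))
```

The design choice is the gate itself: the model's output is treated as a recommendation, and the consequential action stays behind an explicit human approval step.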

Bias in, bias out

Like humans, AI systems are often expected to follow social norms and to be fair and impartial. The problem of bias isn't unique to AI models; humans struggle with bias, too. With AI, however, the potential impact of biased results can be staggering. In AI, bias correlates strongly with input data: impure, unrefined, or erroneous input data will skew your output. The important thing to understand about bias is that navigating it ethically takes sensitivity, insight, and openness.

Humans ultimately control bias in AI: practitioners select the original input data, and in doing so can introduce bias that influences the results. Take Amazon, for example. Amazon receives an enormous number of job applications. When the company decided to test AI in its hiring process, it used the resumes of current employees as input data. The result? Amazon has widely shared that, because the input reflected a skewed demographic sample, the results were biased against women. During testing, the retailer discovered that if the word "woman" appeared anywhere on a resume, that candidate never received a call. Amazon realized that the input data was part of the problem and never rolled the model out to hiring managers.
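The mechanics behind this are easy to reproduce. The sketch below is a deliberately tiny, hypothetical illustration of the same failure mode (not Amazon's actual system or data): when almost every positive training example comes from one group, the model learns the group marker itself as a hiring signal.

```python
# A toy demonstration of "bias in, bias out": if nearly all positive training
# examples come from one group, the model learns the group marker as signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: resumes of past hires (label 1) skew male.
resumes = [
    "software engineer java men's chess club",        # hired
    "backend developer python men's soccer team",     # hired
    "systems programmer cobol men's debate society",  # hired
    "software engineer java women's chess club",      # not hired
    "backend developer python women's soccer team",   # not hired
]
hired = [1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two identical candidates who differ only in one word.
candidates = ["staff engineer java women's rugby team",
              "staff engineer java men's rugby team"]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
# The "women's" resume scores lower purely because of the skewed input data.
```

Running it shows two otherwise identical candidates receiving different scores because of a single word, which is exactly the kind of skew that auditing input data is meant to catch.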

Sharing this information and being sensitive to the results is essential as we continue to discover the best uses of this technology. Since bias is strongly tied to intent, Amazon is not an example of malicious use of AI. Instead, it demonstrates the need for introspection when using AI: Amazon investigated and corrected for the model's bias to reach a more balanced result.

AI has very quickly become a critical part of business, even a major differentiator for some organizations, and ethical issues such as bias are to be expected. The keys to overcoming bias are making sure your input data is as clean as possible and being willing to investigate unethical results openly and transparently.

The role of transparency

Transparency in AI can be defined as the ability to explain the technology's decisions to employees and customers. The problem is that AI isn't inherently transparent, so transparency will be an essential thing to navigate as the technology is refined. When applying transparency at the corporate level, the question is: how do you establish generally applicable rules when different decisions carry different degrees of impact? And how will we know whether an AI that produced a less favorable result was transparent about how it got there?

Transparency is particularly important to consumers. Consumers want to know what personal data companies collect, how they use it, how their algorithms work, and who is responsible if something goes wrong. In some cases, as with Spotify, the algorithm is tied to the organization's competitive advantage. Spotify's value to the consumer lies in the recommendations it makes based on the information collected about the listener. The question, then, is: where is the ethical line of transparency? How much should a company share, and how much should consumers see and know?

Transparency is a moving target; however, it is crucial to assess the impact as algorithms change. When a change occurs, being transparent about that change and its impact on various stakeholders will be key to helping technology advance to an even more innovative place. One possible solution lies in a balance. Organizations that are willing to explain why certain decisions were made in their models can make a positive contribution to transparency without revealing sensitive information.
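One lightweight way to strike that balance, sketched below with hypothetical listening-history features (an illustration, not Spotify's actual system), is to report which inputs pushed an individual recommendation up or down rather than publishing the model itself.

```python
# Decision-level transparency: explain one prediction's drivers without
# exposing the full model. Feature names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["minutes_listened", "skip_rate", "playlist_adds"]
X = np.array([[320.0, 0.10, 5.0],
              [ 40.0, 0.70, 0.0],
              [250.0, 0.20, 3.0],
              [ 15.0, 0.90, 0.0]])
y = np.array([1, 0, 1, 0])  # 1 = listener engaged with past recommendations

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(sample: np.ndarray) -> None:
    # Per-feature contribution to the log-odds of this single decision,
    # listed from most to least influential.
    contributions = model.coef_[0] * sample
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda pair: -abs(pair[1])):
        print(f"{name:>16}: {value:+.3f}")

explain(X[0])  # why was this listener's recommendation ranked highly?
```

An explanation at this level tells a stakeholder why one decision went the way it did without revealing the proprietary model that produced it.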

Is an ethical balance possible?

The short answer is yes, an ethical balance is possible. However, understanding how to navigate the ethics of AI technology is an ongoing process. Some in our industry call for transparency, while companies find it necessary to protect their technology because it is a differentiator. Both sides make significant and valid points, but where does that leave the inherent ethical dilemma?

There are a few key factors to keep in mind, regardless of which side of the conversation you're on.

Ongoing human input and oversight will be an important aspect of AI technology, both ethically and functionally, now and as the technology evolves.

Sensitivity to bias is essential when selecting input data, adjusting models, and tracking results.

Transparency about AI missteps and successes encourages conversations about ethics and helps advance AI technology.

As AI continues to influence the global community, we must remain committed to sharing lessons and asking ethical questions about trust, bias, and transparency. The more we do, the better decisions we'll make, and the easier it will be to understand how to improve AI in the future.

Want to learn more about the role of AI at Broadcom? Find out how we use AI technology to augment IT operations.
