
7. Philosophy, Ethics, and Safety of AI


Strong vs Weak AI

The Strong AI hypothesis is the philosophical position that a program causing a machine to behave exactly like a human being would also give that machine subjective conscious experience and a mind.

On the other hand, Weak AI is the philosophical position that an AI that appears to behave exactly like a human being is only a simulation of human cognitive function, and is not conscious in any sense of the word.

💡
Strong AI holds that a machine can genuinely understand and think, whereas Weak AI holds that a machine merely follows programmed instructions or performs tasks by learning patterns.

Therac-25

The Therac-25 was a radiation therapy machine. At least six patients were given 100 times the intended dose of radiation.

The causes are complex, but at least one was identified: inadequate software engineering practices.

The software should be subject to extensive testing and formal analysis at the module and software level; system testing alone is not adequate. Regression testing should be performed on all software changes.
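The idea of module-level tests that are re-run on every change can be illustrated with a minimal sketch. Everything here (the function, the dose limit, the calibration factor) is hypothetical and only loosely inspired by the Therac-25 failure mode; it is not the actual machine's code.

```python
# Hypothetical dose-calculation module for a radiation therapy machine.
# Illustrative only -- not the actual Therac-25 software.

MAX_SAFE_DOSE_CGY = 200.0  # assumed safety limit, in centigray


def compute_dose(prescribed_cgy: float, calibration_factor: float) -> float:
    """Return the dose to deliver, refusing anything outside the safe range."""
    dose = prescribed_cgy * calibration_factor
    if dose < 0 or dose > MAX_SAFE_DOSE_CGY:
        raise ValueError(f"unsafe dose: {dose} cGy")
    return dose


# Module-level regression tests, intended to be re-run on every change.
def test_compute_dose():
    # Normal case: a correctly calibrated prescription passes through.
    assert compute_dose(100.0, 1.0) == 100.0
    # A fault that would multiply the dose 100x must be rejected,
    # not silently delivered -- the Therac-25 failure mode.
    try:
        compute_dose(100.0, 100.0)
        assert False, "overdose was not rejected"
    except ValueError:
        pass


test_compute_dose()
```

The point is not the arithmetic but the practice: safety checks live in the module itself, and the tests encoding past failure modes are kept and re-run whenever the software changes.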

Why should we care about ethics?

Because the potential for causing harm grows even greater once we start deploying AI systems.

There is plenty of disagreement among philosophers about ethics; after all, it is philosophy.

Looking at what philosophers say about ethics is important because it helps us to frame ethical questions in AI and Engineering.

Six Ethical Perspectives

  1. Rights or Deontological ethics
  2. Good/Harms or Teleological ethics
  3. Virtue based ethics or Aretaic ethics
  4. Communitarian ethics
  5. Communicative ethics
  6. Flourishing ethics

Deontological ethics

Deontological ethics focuses on universal rights, moral duties, prescriptions, and obligations of rational moral agents.

Assessments are based on intrinsic, generally applied acts and laws that meet the criteria of Kant's categorical imperative. As such, this view concerns fundamental human rights, upholding personhood, and ensuring all treat others equally with fairness and respect.

💡
Deontological ethics is the branch of ethics that evaluates the action itself rather than its consequences: moral rightness is tied to fixed norms or duties.

Teleological ethics

Teleological ethics focuses on the goods that ought to be pursued, often considering the harmful or beneficial consequences to individuals or society.

Utility theory (utilitarianism) is incorporated in this viewpoint.

💡
Teleological ethics is the branch of ethics that focuses on outcomes: moral rightness is tied to producing the best results, or the greatest benefit for the greatest number of people.

Aretaic ethics

Aretaic ethics focuses on the virtue of the actors, and on their character, integrity, and beneficence.

💡
Aretaic ethics is concerned less with whether a particular action is right or wrong, and more with whether a person is a good person.

Core Functions

Core functions are design goals for ethical AI derived from the ethical framework.

  1. Identify ethical issues of AI
  2. Improve human awareness of AI
  3. Engage in dialogical collaboration with AI
  4. Ensure the accountability of AI
  5. Maintain the integrity of AI

Identify ethical Issues of AI

This function encourages all parties to recognize the role of AI systems in human technology interactions and to further acknowledge that ethical concerns around fairness, transparency, equity, goodness, beneficence, social utility, human flourishing and happiness, and protections for human agency exist.

This function assists in identifying that privacy protections and security in systems are grounded in the ethical principles of the rights to self-determination and happiness.

Improve human awareness of AI

This function addresses human understanding of how AI systems work within the devices people use, and of how industry creates algorithms from collected data and then uses, stores, and protects that data and responds to threats, breaches, or invasions.

For example, are there aspects of informed consent? Are there just-in-time notifications?

💡
Informed consent means agreeing to provide personal data only after fully understanding how that data will be used.
💡
A just-in-time notification immediately alerts the user when a particular feature is activated or personal data is collected. This lets the user see what the AI system is doing at that moment.
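The just-in-time idea can be sketched as a wrapper that notifies the user the moment a data-collecting feature runs. All names here are illustrative assumptions, not a real API; in a real system the notice would be a UI prompt rather than a print statement.

```python
import functools


def just_in_time_notice(data_kind: str):
    """Decorator: notify the user immediately before `data_kind` is collected."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Stand-in for a real UI notification shown at the moment of collection.
            print(f"[notice] This feature is about to collect: {data_kind}")
            return func(*args, **kwargs)
        return wrapper
    return decorator


@just_in_time_notice("microphone audio")
def start_voice_assistant():
    # Hypothetical feature that would begin recording audio.
    return "listening"
```

Wrapping the feature itself (rather than relying on a one-time consent screen) keeps the notification tied to the exact moment data collection actually happens.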

Ensure the accountability of AI

This function addresses the adherence to ethical conduct of AI systems and of those who design them.

The moral imperatives of the ACM Code adapted for this framework include the initiative to: contribute to society and human well-being, avoid harm to others, be honest and trustworthy, be fair and non-discriminatory, honor human and property rights, respect privacy, honor confidentiality, and evaluate and improve on an ongoing basis.

💡
In other words, this function concerns the ethical conduct of AI systems and of the people who design them.