AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use
Mayssam Tarighi Shaayesteh, Sara Memarian Esfahani, Hossein Mohit
Emerging artificial intelligence (AI) technology has a pronounced impact on higher education, addressing existing challenges in educational settings such as larger school sizes and the scarcity of elite instructors. Across these areas, it has been noted that AI has led to massive changes: some estimates suggest that at least 80 percent of workers will have the quantity and quality of at least some of their tasks influenced (for the better) by AI (Canagasuriam & Lukacik, 2024). In educational contexts, psychological empowerment has been shown to mitigate the combined effects of emotional exhaustion and depression, demonstrating that social relationships and leadership can bolster mental health in institutions (Schermuly & Meyer, 2016). However, AI is not without dangers; cybercriminals have also turned to AI to bolster their attacks, for example in the form of spear phishing or malware installation, showcasing how AI can be abused as a tool to harm enterprises (Mirsky et al., 2023). Psychological empowerment -- comprising meaning, competence, self-determination, and impact -- has strong effects on person-environment interactions, which ultimately influence how individuals feel about and perform their jobs (Gregory et al., 2010).
Collusion Rings Threaten the Integrity of Computer Science Research
The discipline of computer science has historically made effective use of peer-reviewed conference publications as an important mechanism for disseminating timely and impactful research results. Recent attempts to "game" the reviewing system could undermine this mechanism, damaging our ability to share research effectively. I want to alert the community to a growing problem that attacks the fundamental assumptions the review process has depended upon. My hope is that exposing the behavior of a community of unethical individuals will encourage others to exert social pressure that will help bring colluders into line, invite a broader set of people to engage in problem solving, and provide some encouragement for people trapped into collusion by more senior researchers to extricate themselves and make common cause with the rest of the community. My motivation for writing this Viewpoint is that I became aware of an example in the computer-architecture community where a junior researcher may have taken his own life instead of continuing to engage in a possible collusion ring. Collusion rings extend far beyond the field of computer architecture.
AI virtues -- The missing link in putting AI ethics into practice
Several seminal ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, widespread criticism has pointed out a lack of practical realization of these principles. In response, AI ethics underwent a practical turn, but without deviating from the principled approach and the many shortcomings associated with it. This paper proposes a different approach. It defines four basic AI virtues -- justice, honesty, responsibility, and care -- all of which represent specific motivational settings that constitute the very precondition for ethical decision making in the AI field. Moreover, it defines two second-order AI virtues, prudence and fortitude, that bolster achievement of the basic virtues by helping to overcome bounded ethicality, the many hidden psychological forces that impair ethical decision making and that have hitherto been completely disregarded in AI ethics. Lastly, the paper describes measures for successfully cultivating the mentioned virtues in organizations dealing with AI research and development.
Truly Autonomous Machines Are Ethical
John Hooker, Carnegie Mellon University. Revised December 2018.
While many see the prospect of autonomous machines as threatening, autonomy may be exactly what we want in a superintelligent machine. There is a sense of autonomy, deeply rooted in the ethical literature, in which an autonomous machine is necessarily an ethical one. Development of the theory underlying this idea not only reveals the advantages of autonomy, but it sheds light on a number of issues in the ethics of artificial intelligence. It helps us to understand what sort of obligations we owe to machines, and what obligations they owe to us. It clears up the issue of assigning responsibility to machines or their creators. More generally, a concept of autonomy that is adequate to both human and artificial intelligence can lead to a more adequate ethical theory for both.
There is a good deal of trepidation at the prospect of autonomous machines. They may wreak havoc and even turn on their creators. We fear losing control of machines that have minds of their own, particularly if they are intelligent enough to outwit us. There is talk of a "singularity" in technological development, at which point machines will start designing themselves and create superintelligence (Vinge 1993, Bostrom 2014). Do we want such machines to be autonomous? There is a sense of autonomy, deeply rooted in the ethics literature, in which this may be exactly what we want. The attraction of an autonomous machine, in this sense, is that it is an ethical machine. The aim of this paper is to explain why this is so, and to show that the associated theory can shed light on a number of issues in the ethics of artificial intelligence (AI).
Only the ethical need apply
The "great global brain drain" is how futurist Richard Samson describes it. As the century progresses, he predicts, more and more jobs will be sucked up by technology and sophisticated computers, forcing humans to hone skills machines can't duplicate - at least not yet. Qualities such as ethical judgment, compassion, intuition, responsibility, and creativity will be what stand out in an automated world. With ethics issues spiking into the news almost weekly, the idea of a work world in which individual ethical acumen is viewed as an essential job skill is far from outlandish. The signs are already here.