It's amazing how far we've come with the internet and IT in general. With that technology, however, come cybersecurity problems. In response, artificial intelligence (AI) is changing the face of cybersecurity. AI uses machine learning to identify the characteristics of harmful software. Done well, this integration promises to revolutionize the online market, especially identity theft protection.
NASA's latest Mars rover is done with its testing and is ready to embark upon its first scientific mission. After landing on the planet in February, the Perseverance rover has been busy trying out its many instruments: converting atmospheric carbon dioxide into oxygen that would be needed for crewed missions, flying a helicopter, and taking photos. Now it will begin its mission: looking for evidence of life. Over the coming months, it will use a variety of sophisticated instruments to scan the planet's Jezero Crater for places of interest, drill into rocks and soil, and collect specimens to be retrieved and brought to Earth by future spacecraft. The rover is packed with 23 cameras, sensors, a laser, and a drill-equipped robotic arm.
Recently, I've been discussing Professor John Lennox's book 2084, which is all about the development and production of artificial intelligence. As an atheist, I clearly have many differences with his Christian perspective. But wherever you sit with regard to the God question, Christianity, or the ethical concerns raised by the advancement of AI, you have to give varying perspectives their due. Today, I wanted to spend a moment chatting about how artificial intelligence is impacting the advertising world and the serious ethical questions that raises. So let's begin with a couple of points from Professor Lennox's book.
The federal government has unveiled its first action plan dedicated to establishing Australia as a global leader in developing and adopting responsible artificial intelligence (AI). Industry, Science and Technology Minister Christian Porter said the benefits of AI include protecting the environment, improving health outcomes, promoting smart cities, and boosting the economy. "AI could contribute more than $20 trillion to the global economy by 2030, and the AI Action Plan will help us leverage opportunities for AI to further strengthen the economy and improve the quality of life of all Australians, while ensuring that the development and adoption of AI is guided by appropriate safeguards, privacy and ethical considerations," he said. The government allocated $124.1 million in funding through the May budget to deliver some of the plan's key measures.
OECD.AI is an inclusive hub for public policy on AI that helps countries encourage, nurture and monitor the development and use of trustworthy AI. From the measurement of AI trends and developments to the direction and impact of national and regional AI policies and initiatives, OECD.AI is a prime example of how to move the AI discussion from principles to practice. Its up-to-date repository of over 600 AI policy initiatives from 60 countries enables the comparison of key elements of national AI policies in an interactive manner. Its work and indicators have informed and enhanced national and international analysis such as the Pan-Canadian AI Strategy Impact Assessment, the German AI Observatory, the G20 background paper on Trustworthy AI in Health, multiple G20 reports, and the recent EC Proposal for AI Regulation. Armando Guio, CAF Consultant at the Presidency of the Republic of Colombia, believes that "the Observatory has rapidly become one of the most important sources of data and knowledge for AI governance."
Organizations around the globe are becoming more aware of the risks artificial intelligence (AI) may pose, including bias and potential job loss due to automation. At the same time, AI is providing many tangible benefits for organizations and society. For organizations, this creates a difficult balance between the potential harm AI might cause and the cost of not adopting the technology. Three emerging practices can help organizations navigate the complex world of moral dilemmas created by autonomous and intelligent systems. AI risks continue to grow, but so does the number of public and private organizations that are releasing ethical principles to guide the development and use of AI.
Outgoing Secretary-General of the Organisation for Economic Co-operation and Development (OECD) Angel Gurria applauds as his successor, Mathias Cormann of Australia, takes over at the OECD headquarters in Paris on Tuesday, June 1, 2021. A recent study from the Pew Research Center showed that 53% of people in 20 countries feel that artificial intelligence has been a good thing for society. While over half of respondents have a positive view of AI, roughly one in every three people in these countries is concerned about the impacts AI can have on society. How do we ensure that AI is trustworthy and its benefits are shared by all? As the statistics show, while there is incremental improvement, there is still a level of hesitancy and suspicion towards AI among citizens around the world.
One of Rembrandt's finest works, Militia Company of District II under the Command of Captain Frans Banninck Cocq (better known as The Night Watch) from 1642, is a prime representation of Dutch Golden Age painting. But the painting was greatly disfigured after the artist's death, when it was moved from its original location at the Arquebusiers Guild Hall to Amsterdam's City Hall in 1715. City officials wanted to place it in a gallery between two doors, but the painting was too big to fit. Instead of finding another location, they cut large panels from the sides as well as some sections from the top and bottom. The fragments were lost after removal.
Like any technology, AI has just as much potential for harm as for good. Some experts predict that once the excitement and novelty of AI-assisted clinical procedures wear off, problems will begin to pop up. For example, few of the 130 AI devices the U.S. Food and Drug Administration (FDA) has approved over the past couple of years have been tested in clinical trials. As a result, AI could miss a tumor during a CT scan, recommend the wrong medication, give a hospital bed to a patient who needs it less than another, and produce many other errors. And if there is a fundamental flaw in the programming, it could misdiagnose thousands of patients instead of just one.
Dr. Dhonam Pemba is the CEO and Co-Founder of KidX. He is a neural engineer by education, a former rocket scientist by trade, and an AI entrepreneur. He received his undergraduate degree in Biomedical Engineering from Johns Hopkins University and his PhD, also in BME, from the University of California, Irvine, where his thesis focused on neural interfaces. Can you tell me about the NASA JPL project and how it was related to your PhD work? My PhD work was building miniature implantable neural devices, very similar to the work that Elon Musk's company Neuralink is now doing.