Privacy, however, remains an unresolved challenge in the industry, particularly with regard to compliance and regulation: https://lnkd.in/d7vE5cs. We at the Centre Of Excellence For Protection Of Human Rights In Cyberspace (CEPHRC) and the TeleLaw Project Of PTLB can help interested stakeholders formulate a #technolegal policy for AI and related fields. Collaborate with us in 2020 and let us create a better world together.
There are many examples of machine learning algorithms later found to be unfair, including Amazon's recruiting tool and Google's image-labeling system. Researchers have been aware of these problems and have worked to impose restrictions that ensure fairness from the outset. For example, an approach called CB (color blind) imposes the restriction that discriminating variables, such as race or gender, must not be used in predicting outcomes. Another, called DP (demographic parity), ensures that groups are treated proportionally fairly: the proportion of each group receiving a positive outcome must be equal, or close to equal, across the discriminating and nondiscriminating groups.
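The demographic-parity (DP) criterion described above can be sketched as a simple check: compute the positive-outcome rate per group and measure the largest gap between groups. This is a minimal illustration, not any particular library's implementation; the function names and the toy predictions below are assumptions made for the example.

```python
# Minimal sketch of the demographic-parity (DP) criterion: the rate of
# positive (1) outcomes should be roughly equal across groups.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model predictions for two demographic groups.
predictions = {
    "group_a": [1, 1, 0, 1, 0, 1, 0, 1],  # 5/8 = 0.625 positive
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 = 0.375 positive
}

gap = demographic_parity_gap(predictions)
print(f"demographic parity gap: {gap:.3f}")  # 0.625 - 0.375 = 0.250
```

A practical system would compare this gap against a tolerance threshold rather than requiring exact equality; the CB (color blind) approach, by contrast, would simply exclude the group attribute from the model's inputs altogether.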
Cyber security is a complicated field that requires domain-specific expertise. India and other jurisdictions have many capable cyber security experts who fight sophisticated and novel cyber attacks on a daily basis, and there are many good cyber security products and services that can prove handy in times of crisis. However, law and legal policy always lag behind technology in general, and behind cyber law and cyber security in particular. These fields need yearly review and modification, yet in some cases they go unrevised for decades.
Artificial intelligence technology is advancing and bringing opportunities for society, but also profound challenges for individual freedom. AI is a powerful enabler of surveillance technology, such as facial recognition, and many countries are grappling with appropriate rules for its use, weighing the security benefits against the privacy risks. Authoritarian regimes, however, lack strong institutional mechanisms to protect individual privacy (a free and independent press, civil society, an independent judiciary), and the result is the widespread use of AI for surveillance and repression. This dynamic is most acute in China, where the government is pioneering new uses of AI to monitor and control its population. China has already begun to export this technology, along with laws and norms for illiberal uses, to other nations.