

Advantages and Disadvantages of Artificial Intelligence: How To Use AI in Your Business


Artificial intelligence (AI) has virtually unlimited applications and is already part of our everyday life, offering countless solutions across all industries. AI is a major force in the business world, playing a key role in data analysis, marketing, finance, advertising, medicine, technology, science and engineering, where machines learn from stimuli and react in increasingly human-like ways. Artificial intelligence has both advantages and disadvantages, so it's important to know how to use it to maximize its potential within your organization.

China's new proposed law could strangle the development of AI


The proposed law mandates that companies use algorithms to "actively spread positive energy." Under the proposal, companies must submit their algorithms to the government for approval or risk being fined and having their services terminated. This is an incredibly bad and even dangerous idea. It's what happens when people who don't understand AI try to regulate it. Instead of fostering innovation, governments are looking at AI through their own lenses of fear and trying to reduce the harms they worry about most. Thus, Western regulators focus on fears such as violation of privacy, while Chinese regulators are perfectly okay with collecting private data on their citizens but are concerned about AI's ability to influence people in ways the government deems undesirable.

The U.N. Warns That AI Can Pose A Threat To Human Rights

NPR Technology

The United Nations High Commissioner for Human Rights Michelle Bachelet speaks at a climate event in Madrid in 2019. A recent report of hers warns of the threats that AI can pose to human rights. The United Nations' human rights chief has called on member states to put a moratorium on the sale and use of artificial intelligence systems until the "negative, even catastrophic" risks they pose can be addressed. The remarks by U.N. High Commissioner for Human Rights Michelle Bachelet were in reference to a new report on the subject released in Geneva.

DCGAN from Scratch with Tensorflow Keras -- Create Fake Images from CELEB-A Dataset


Generator: the generator creates new data instances that are "similar" to the training data, in our case celebA images. The generator takes a random latent vector and outputs a "fake" image of the same size as our reshaped celebA image. Discriminator: the discriminator evaluates the authenticity of the provided images; it classifies the images from the generator against the original images. The discriminator takes a true or fake image and outputs a probability estimate ranging between 0 and 1. Here, D refers to the discriminator network, while G refers to the generator.
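The adversarial relationship between these two networks is captured by the standard GAN minimax objective from Goodfellow et al., which the discriminator maximizes and the generator minimizes; the symbols follow the description above:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator's probability estimate that a real image x is authentic, and G(z) is the fake image produced from a random latent vector z; training alternates between updating D to sharpen this estimate and updating G to fool it.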

The best video game story of the year is about 'Lost Judgment's' middle-aged, skateboarding attorney

Washington Post - Technology News

One more important note: "Lost Judgment" also breaks down walls of the previous RGG Studio titles by highlighting a variety of characters outside of the seedy underbelly of Japan. Yes, gang members and sex workers still populate the story, but the Judgment cast is largely made up of public servants, particularly Saori Shirosaki, Yagami's defense attorney colleague. Saori owns several moments in the game. While the Yakuza series was born to attract an audience of Japanese men, RGG Studio games would do well to highlight its women. And yes, they should all fight too.

UN calls for moratorium on AI that threatens human rights


The United Nations High Commissioner for Human Rights Michelle Bachelet on Wednesday called for a moratorium on the sale and use of artificial intelligence (AI) systems that threaten human rights until adequate safeguards are in place to ensure the technology will not be abused. "We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact," Bachelet said in a press release. The UN human rights office released a report on Wednesday warning of the risks of AI technologies, and emphasising that while AI can serve as a force for good, it can also cause catastrophic effects if used irresponsibly. "The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society," the report states. Bachelet, who is the UN's human rights chief, stressed that AI applications that do not comply with international human rights law must be banned.

Relationship between Trust and Law is counterintuitive and paradoxical


The European Commission's AI regulation proposal is a proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (published in April 2021). Its explanatory memorandum explicitly aims to implement, among other things, an ecosystem of trust by proposing a legal framework for trustworthy AI, and the word trust appears many times (14 trust, 1 trusted, 2 trustful, 21 trustworthy, 3 trustworthiness, 6 entrusted, 1 entrusting). This is somewhat surprising from a Swiss legal point of view. Indeed, under Swiss law, trust (German: Vertrauen / Italian: Fiducia / French: Confiance) is never mentioned in, for example, the Swiss Civil Code, the Code of Obligations, or the Federal Product Liability Act, which constitute fundamental legal bases. However, we are starting to see this trend in Switzerland as well: the second key objective of the Digital Switzerland Strategy is guaranteeing security, trust and transparency.

Artificial Intelligence and how the courts approach the legal implications


Artificial intelligence (AI) and automation are continually changing the way we do business. Organisations across all industries and sectors are deploying machine learning and NLP (natural language processing) technologies to automate processes in almost every part of their operation. For businesses, AI means improving efficiencies, amplifying productivity and reducing cost. But while there are many advantages, AI also presents a wide range of legal challenges – especially in areas such as regulatory compliance, liability, risk, privacy and ethics. To compound matters, regulation of AI is slow to develop, leaving businesses with no choice but to navigate the unknown.

The responsibilities of AI-first investors – TechCrunch


Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies -- such as image labeling -- receive large (undisclosed) portions of their revenue from the defense industry. Investors in AI-first technology companies that aren't even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, prosecute their duties. Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.

Queensland police to trial AI tool designed to predict and prevent domestic violence incidents


Queensland police are preparing to begin trials of an artificial intelligence system to identify high-risk domestic violence offenders, and officers intend to use the data to "knock on doors" before serious escalation. The "actuarial tool" uses data from the police Qprime computer system to develop a risk assessment of all potential domestic and family violence offenders. The algorithm has been in development for about three years and practical trials will begin in some police districts before the end of 2021. "With these perpetrators, we will not wait for a triple-zero phone call and for a domestic and family violence incident to reach the point of crisis," acting Supt Ben Martain said. "Rather, with this cohort of perpetrators, who our predictive analytical tools tell us are most likely to escalate into further DFV offending, we are proactively knocking on doors without any call for service."