Starting on September 25, Valve will "review" community comments across Steam's game hubs. Reported discussion threads and individual posts will fall on the desk of the platform's content moderation team, who will remove anything that violates Steam's community guidelines. Developers who already take a DIY approach to moderation can opt out of the new system. Valve assures devs that it won't be actively spying on community discussions or posting in threads; instead, it says "we'll only be communicating with players if it's necessary when issuing a warning or ban for reported content." As noted in its blog post, Valve has long been reviewing flagged community content like screenshots, artwork, guides, user profiles, community groups, and user reviews -- but it's been "hesitant" to wade into individual discussions, so as not to meddle with the distinct style of each game hub.
On September 19, 2019, the European Parliament Research Service (EPRS) released a paper, European Union (EU) Guidelines on Ethics in Artificial Intelligence (AI): Context and Implementation (the "Paper"), to shed light on the ethical rules established under the EU Guidelines on Ethics in AI (the "Guidelines"). The Guidelines, which are nonbinding, were published in April 2019 after the European Parliament was directed to update and complement the existing Union legal framework with guiding ethical principles based on a "human-centric" approach to AI. The Paper aims to provide guidance on the key ethical requirements recommended in the Guidelines when designing, developing, implementing, or using AI products and services to promote trustworthy, ethical, and robust AI systems. The Paper also identifies some implementation challenges and possible future EU action, calling for certain steps including clarifying the Guidelines, fostering the adoption of ethical standards, and adopting legally binding instruments to set common rules on transparency. Of note, the Guidelines highlight that all AI stakeholders must comply with the General Data Protection Regulation (GDPR) principles and advise the AI community to guarantee that privacy and personal data are protected, both when building and when running AI systems, to afford citizens full control over their data.
Apple has started removing some apps from its App Store that violate its policies, a move that signals the company's stricter enforcement of the App Store Review Guidelines. It isn't clear how many apps are affected by the crackdown, but it seems Apple is serious about tracking down all of the App Store apps that violate its guidelines. The Cupertino giant is sending a notice via email to all developers of the apps in question to inform them of their violations. Apple is also encouraging developers to make the necessary changes before they resubmit their apps to the App Store. "Upon re-evaluation, we found that your app is not in compliance with the App Store Review Guidelines."
With driverless cars on the horizon, the government is adding its two cents on the matter. On Tuesday, the U.S. Transportation Department (DOT) released a four-part policy outlining safety measures to ensure all vehicles are ready to be on the road. The four sections of the policy, created in consultation with experts in the field, state governments, safety advocates, and more, include a 15-point safety assessment, a Model State policy, the National Highway Traffic Safety Administration's (NHTSA) current regulatory tools, and modern regulatory tools. According to the DOT, the goal of the policy is to provide carmakers with a framework. "Automated vehicles have the potential to save thousands of lives, driving the single biggest leap in road safety that our country has ever taken," said U.S. Transportation Secretary Anthony Foxx in a statement.
As promised back at WWDC, Apple is now allowing developers to challenge App Store rules. Apps that are already on the App Store will no longer need to resolve guideline violations before Apple approves bug fixes -- unless those violations are related to legal issues. And Apple will allow developers to suggest changes to its guidelines. Apple announced that these changes are live and explained that while guideline violations won't hold up bug fixes, developers will need to address guideline violations in their next submissions. As you may remember, Apple came up with these changes shortly after a public battle with Basecamp over the "Hey" email app.