CFAA
The Government Finally Figured Out Which Hackers Are the Good Guys
Last week, the Justice Department announced a newly revised policy for when prosecutors should charge people under the Computer Fraud and Abuse Act, the decades-old, controversial anti-hacking law. Many of the fights around the CFAA have hinged on what is--and is not--illegal hacking: If a mother violates a website's terms of service by creating a social media profile with a photo of someone else and a fake name, for instance, does that qualify? Or if a police officer searches a government license plate database for personal reasons, instead of work reasons, is that hacking? What about if a Major League Baseball team guesses a former employee's password and uses it to download information about his new team? Or a college student tries to find bugs in a voting app as part of an election security course?
- Law (1.00)
- Leisure & Entertainment > Sports > Baseball (0.89)
- Government > Regional Government > North America Government > United States Government (0.77)
Collaborative Filtering with Attribution Alignment for Review-based Non-overlapped Cross Domain Recommendation
Liu, Weiming, Zheng, Xiaolin, Hu, Mengling, Chen, Chaochao
Cross-Domain Recommendation (CDR) has been widely studied as a way to use knowledge from different domains to solve the data sparsity and cold-start problems in recommender systems. In this paper, we focus on the Review-based Non-overlapped Cross Domain Recommendation (RNCDR) problem. The problem is common and challenging due to two main aspects: there are only positive user-item ratings on the target domain, and there are no overlapped users across the different domains. Most previous CDR approaches cannot solve the RNCDR problem well, since (1) they cannot effectively combine reviews with other information (e.g., IDs or ratings) to obtain expressive user and item embeddings, and (2) they cannot reduce the domain discrepancy on users and items. To fill this gap, we propose the Collaborative Filtering with Attribution Alignment model (CFAA), a cross-domain recommendation framework for the RNCDR problem. CFAA includes two main modules: a rating prediction module and an embedding attribution alignment module. The former jointly mines reviews, one-hot IDs, and multi-hot historical ratings to generate expressive user and item embeddings. The latter includes vertical attribution alignment and horizontal attribution alignment, which reduce the discrepancy from multiple perspectives. Our empirical study on Douban and Amazon datasets demonstrates that CFAA significantly outperforms state-of-the-art models under the RNCDR setting.
- North America > United States (0.14)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.05)
- Asia > China (0.04)
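The embedding construction the abstract describes, fusing a one-hot ID embedding with a multi-hot rating history, can be illustrated with a minimal sketch. Everything below (names, dimensions, and the mean-pooling choice) is invented for illustration and is not the paper's actual architecture, which also incorporates review text and the alignment modules:

```python
# Toy sketch: build a user vector from an ID embedding (one-hot lookup)
# plus the mean embedding of items the user has rated (multi-hot history).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 5, 8, 4

id_emb = rng.normal(size=(n_users, dim))    # one-hot ID -> embedding row
item_emb = rng.normal(size=(n_items, dim))  # item embedding table

def user_vector(user, rated_items):
    """Fuse the user's ID embedding with their multi-hot rating history."""
    if rated_items:
        hist = item_emb[rated_items].mean(axis=0)  # pool rated-item embeddings
    else:
        hist = np.zeros(dim)                       # cold user: no history
    return np.concatenate([id_emb[user], hist])    # fused 2*dim vector

u = user_vector(2, [0, 3, 7])  # shape (8,)
```

A real model would learn these tables jointly with a rating-prediction loss; the point here is only how one-hot and multi-hot inputs combine into a single expressive vector.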
Adversarial Machine Learning and the CFAA - Schneier on Security
As I've noted in the past, ICT-related legislation tends to be considerably overbroad in scope at the best of times, and prosecutors have tried very hard to open it up further with case law. Whilst some judges do pull things in a bit, too many allow prosecutorial overreach to go too far. A rule of thumb for legislation should be to strip the ICT aspects from any proposed legislation and see what equivalent legislation exists for non-ICT situations; any ICT legislation should be similarly restrained in scope. After all, it is not illegal to walk up to somebody's door and knock politely, and if you've made a nuisance of yourself there are civil remedies. ICT legislation, however, makes the equivalent online activity a criminal activity from the get-go, and it's frequently treated as something worse than armed robbery.
- Law > Statutes (1.00)
- Government (1.00)
Law and Adversarial Machine Learning
Kumar, Ram Shankar Siva, O'Brien, David R., Albert, Kendra, Viljoen, Salome
When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond? Through scenarios grounded in the adversarial ML literature, we explore how aspects of computer crime, copyright, and tort law interface with perturbation, poisoning, model stealing, and model inversion attacks, showing that some attacks are more likely to result in liability than others. We end with a call to action for ML researchers: invest in transparent benchmarks of attacks and defenses, architect ML systems with forensics in mind, and think more about adversarial machine learning in the context of civil liberties. The paper is targeted at ML researchers who have no legal background.
- Europe (0.28)
- North America > United States > New York > New York County > New York City (0.05)
A legal question for the AI age: Is tricking a robot the same thing as hacking it?
A team of computer scientists and a lawyer at University of Washington are raising a curious question: Do current US laws cover cutting-edge research that allows people to bend AI to their will? The research, called adversarial machine learning, takes advantage of the way AI looks at the world, tricking the algorithm to make a different decision than it was designed to make. For example, an attacker might trick AI into perceiving a stop sign as a speed limit sign, or poison an automated credit-rating system in order to get a cheaper loan. The issue could affect every tech company using AI today: If this kind of intervention constitutes hacking, are companies now legally required to protect their systems from adversarial machine learning as they do typical hacking? And if this is not hacking under the legal definition, who's responsible if an attacker crashes someone else's car by tricking its AI?
- Law (0.74)
- Transportation > Ground > Road (0.40)
- Government > Regional Government > North America Government > United States Government (0.37)
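The stop-sign example above can be sketched numerically. The sketch below is a hypothetical fast-gradient-sign perturbation against a plain logistic-regression "classifier" with made-up weights, not any system discussed in the article; real attacks target deep networks, but the mechanism is the same: nudge the input along the gradient of the loss so the model's decision flips:

```python
# Toy FGSM-style adversarial perturbation against logistic regression.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a correctly classified input (true label 1).
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 0.2, 0.4])
y = 1.0

pred = sigmoid(w @ x)          # ~0.88: the model is confident it's class 1

# Gradient of the cross-entropy loss with respect to the INPUT, not the weights.
grad_x = (pred - y) * w

# Fast Gradient Sign Method: step each feature in the sign of that gradient.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

adv_pred = sigmoid(w @ x_adv)  # ~0.48: the decision has flipped to class 0
```

In high-dimensional inputs such as images, the same sign-step works with a far smaller `eps`, which is why the perturbed stop sign can look unchanged to a human while the classifier reads a speed-limit sign.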
I, For One, Welcome Our Forthcoming New robots.txt Overlords
Despite my week-long Twitter consumption sabbatical (helped -- in part -- by the nigh week-long internet and power outage here in Maine), I still catch useful snippets from folks. My cow-orker @dabdine shunted a tweet by @terrencehart into a Slack channel this morning, and said tweet contained a link to this little gem. Said gem is the text of a very recent ruling from a District Court in Texas and deals with a favourite subject of mine: robots.txt. The background of the case is that there were two parties who both ran websites for oil and gas professionals that include job postings. One party filed a lawsuit against the other asserting that they hacked into its system and accessed and used various information in violation of the Computer Fraud and Abuse Act (CFAA), the Stored Wire and Electronic Communications and Transactional Records Access Act (SWECTRA), the Racketeer Influenced and Corrupt Organizations Act (RICO), the Texas Harmful Access by Computer Act (THACA), the Texas Theft Liability Act (TTLA), and the Texas Uniform Trade Secrets Act (TUTS).
- North America > United States > Texas (0.90)
- North America > United States > Maine (0.25)
- North America > United States > Pennsylvania (0.05)
- Law > Litigation (1.00)
- Government > Regional Government > North America Government > United States Government (0.36)
- Information Technology > Communications (0.90)
- Information Technology > Artificial Intelligence > Robots (0.72)
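For readers unfamiliar with the convention at the heart of that ruling: robots.txt is a plain-text file served at the root of a site, and well-behaved crawlers check it before fetching a path. Python's standard library can evaluate the rules; the file contents below are a made-up example, not the parties' actual robots.txt:

```python
# Evaluate a (fictional) robots.txt with the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /jobs/private/
Allow: /jobs/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/jobs/listings"))      # True
print(rp.can_fetch("*", "https://example.com/jobs/private/data"))  # False
```

The legal question in the case is, in effect, whether ignoring a `Disallow` line like the one above can help make subsequent access "unauthorized" under statutes such as the CFAA.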
Mr. Robot Killed the Hollywood Hacker
For decades Hollywood has treated computers as magic boxes from which endless plot points could be conjured, in denial of all common sense. TV and movies depicted data centers accessible only through undersea intake valves, cryptography that can be cracked through a universal key, and e-mails whose text arrives one letter at a time, all in caps. "Hollywood hacker bullshit," as a character named Romero says in an early episode of Mr. Robot, now in its second season on the USA Network. "I've been in this game 27 years. Not once have I come across an animated singing virus."
- Asia > Middle East > Republic of Türkiye > Adana Province > Adana (0.07)
- North America > United States > Michigan (0.05)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Media (1.00)
- Leisure & Entertainment (1.00)
- Information Technology > Security & Privacy (1.00)
- Law (0.99)
- Information Technology > Artificial Intelligence > Robots (0.67)
- Information Technology > Communications > Networks (0.48)
ACLU Files Challenge To CFAA Over Blocking Research Into Discrimination Online - Techdirt
There's been a lot of talk lately about the possibility of discrimination being built into the algorithms that determine our lives. In the past year, multiple publications have discussed what happens when algorithms are racist in a time when algorithms decide more and more of our lives. Just recently, we talked about judges using proprietary algorithms in sentencing, and how those algorithms themselves may judge people based on things like skin color. And just a few days ago, there was a fascinating NY Times article about inherent bias in artificial intelligence systems. I even went to a conference recently, where there was a whole discussion on the question of what do you do "if your algorithm is racist."