The ubiquitous availability of computing devices and widespread use of the internet continuously generate large amounts of data. As a result, the amount of available information on any given topic far exceeds humans' capacity to process it, causing what is known as information overload. To cope efficiently with large amounts of information and generate content of significant value to users, we need to identify, merge and summarise information. Data summaries gather related information into a shorter format that enables answering complicated questions, gaining new insights and discovering conceptual boundaries. This thesis focuses on three main challenges in alleviating information overload using novel summarisation techniques. It further aims to facilitate the analysis of documents to support personalised information extraction. The research issues fall into four areas, covering (i) feature engineering in document summarisation, (ii) traditional static and inflexible summaries, (iii) traditional generic summarisation approaches, and (iv) the need for reference summaries. We propose novel approaches to tackle these challenges by (i) enabling automatic, intelligent feature engineering, (ii) enabling flexible and interactive summarisation, and (iii) utilising intelligent and personalised summarisation approaches. The experimental results demonstrate the effectiveness of the proposed approaches compared to other state-of-the-art models. We further propose summarisation-based solutions to the information overload problem in different domains, covering network traffic data, health data and business process data.
Representatives from Google have told an Australian Parliamentary committee looking into foreign interference that the country has not been the target of coordinated influence campaigns. "We've not seen the sort of coordinated foreign influence campaigns targeted at Australia that we have with other jurisdictions, including the United States," Google director of law enforcement and information security Richard Salgado said. "Some of the disinformation campaigns that originate outside Australia, even if not targeting Australia, may affect Australia as collateral ... but not as a target of the campaign. "We have found no instances of foreign coordinated influence campaigns targeting Australia." While acknowledging campaigns that reach Australia do exist, he reiterated they have not specifically targeted Australia. "Some of these campaigns are broad enough that the disinformation could be, sort of, divisive in any jurisdiction in which it is consumed, even if it's not targeting that jurisdiction," Salgado told the Select Committee on Foreign Interference Through Social Media. "Google services, YouTube in particular, which is where we have seen most of these kinds of campaigns run, isn't really very well designed for the purpose of targeting groups to create the division that some of the other platforms have suffered, so it isn't actually all that surprising that we haven't seen this on our services." Appearing alongside Salgado on Friday was Google Australia and New Zealand director of government affairs and public policy Lucinda Longcroft, who told the committee her organisation has been in close contact with the Australian government as it looks to prevent disinformation from emerging in the lead-up to the next federal election. Additionally, the pair said that Google undertakes a "constant tuning" of the artificial intelligence and machine learning tech it uses.
The company said it also constantly adjusts policies and strategies to avoid moments of surprise, in which Google could find itself unable to handle a shift in attacker strategy or in the volume of attacks. Appearing earlier in the week before the Parliamentary Joint Committee on Corporations and Financial Services, Google VP of product membership and partnerships Diana Layfield said her company does not monetise data from Google Pay in Australia. "I suppose you could argue that there are non-transaction data aspects -- so people's personal profile information," she added. "If you sign up for an app, you have to have a Google account."
Alethea AI, a synthetic media company, is piloting "privacy-preserving face skins," or digital masks that counter facial recognition algorithms and help users preserve privacy in pre-recorded videos. The move comes as companies such as IBM, Microsoft, and Amazon announced they would suspend the sale of their facial recognition technology to law enforcement agencies. "This is a new technique we developed in-house that wraps a face with our AI algorithms," said Alethea AI CEO Arif Khan. "Avatars are fun to play with and develop, but these 'masks/skins' are a different, more potent, animal to preserve privacy." The Los Angeles-based startup launched in 2019 with a focus on creating avatars for content creators that the creators could license out for revenue. The idea comes as deepfakes, or manipulated media that can make someone appear as if they are doing or saying anything, become more accessible and widespread. According to a 2019 report from Deep Trace, a company which detects and monitors deepfakes, there were over 14,000 deepfakes online in 2019 and over 850 people were targeted by them. Alethea AI wants to let creators use their own synthetic media avatars for marketing purposes, in a sense trying to let people leverage deepfakes of themselves for money. Khan compares the proliferation of facial recognition data now to the Napster-style explosion in music piracy in the early 2000s.
Companies like Clearview AI have already harvested large amounts of data from people for facial recognition algorithms, then resold this data to security services without consent, and with all the bias inherent in facial recognition algorithms, which are generally less accurate on women and people of color. Clearview AI has marketed itself to law enforcement and scraped billions of images from websites like Facebook, Youtube, and Venmo. The company is currently being sued for doing so. "We will get to a point where there needs to be an iTunes sort of layer, where your face and voice data somehow gets protected," said Khan. One part of that is creators licensing out their likeness for a fee. Crypto entrepreneur Alex Masmej was the first such avatar, and for $99 you can hire the avatar to say 200 words of whatever you want, provided the real Masmej approves the text. Alethea AI has also partnered with software firm Oasis Labs, so that all content generated for Alethea AI's synthetic media marketplace will be verified using Oasis Labs' secure blockchain, akin to Twitter's "verified" blue check mark. "There are a lot of Black Mirror scenarios when we think of deepfakes, but if my personal approval is needed for my deepfakes and it's then time-stamped on a public blockchain for anyone to verify the videos that I actually want to release, that provides a protection that deepfakes are currently lacking," said Masmej. The privacy pilot takes this idea one step further, not only creating a deepfake to license out, but preventing companies or anyone else from grabbing your facial data from a recording. There are two parts to the privacy component. The first, currently being piloted, involves pre-recorded videos.
Users upload a video, identify where and what face skin they would like superimposed on their own, and then Alethea AI's algorithms map the key points on their face and wrap the mask around this key-point map. The video is then sent back to the client. Alethea AI also wants to enable face masking during real-time communications, such as over a Zoom call. But Khan says computing power doesn't quite allow that yet, though he hopes it should be possible in a year. Alethea AI piloted one example of the tech with Crypto AI Profit, a blockchain and AI influencer, who used it during a Youtube video. Deepfakes, voice spoofing, and other tech-enabled mimicry seem here to stay, but Khan is still optimistic that we're not yet at the point of no return when it comes to protecting ourselves. "I'm hopeful that the individual is accorded some sort of framework in this entire emerging landscape," said Khan. "It's going to be a very interesting ride. I don't think the battle is fully decided, although existing systems are oriented towards preserving larger, more corporate input."
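Alethea AI has not published its algorithms, so the "map the key points, then wrap the mask around them" step can only be illustrated in general terms. The sketch below shows one common way such alignment is done: fit a least-squares affine transform that maps a mask's canonical landmark coordinates onto landmarks detected on a face in a frame. The landmark values and function names here are hypothetical, not Alethea AI's implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform (3x2 matrix) mapping src points to dst points."""
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                      # n x 3 homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)     # solve A @ M ~= dst
    return M

def warp_points(pts, M):
    """Apply the fitted affine transform to mask coordinates."""
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ M

# Hypothetical landmarks: a canonical mask template, and the same points
# as detected on a video frame (in practice from a face-landmark model).
mask_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
face_pts = mask_pts * 2.0 + np.array([10.0, 20.0])  # face is scaled and shifted

M = fit_affine(mask_pts, face_pts)
aligned = warp_points(mask_pts, M)                  # mask now sits on the face
```

In a real pipeline the aligned mask would then be rendered over each frame; real-time use, as the article notes, mainly depends on running the landmark detection and warping fast enough per frame.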