Well File:

Pattern Recognition


Finally! Windows 11's Snipping Tool will let you copy text from screenshots

PCWorld

Well now this will be useful! Microsoft is adding a text recognition function (OCR) to the Windows 11 Snipping Tool. The new feature will let you copy text from screenshots and paste it into word processing programs, for example. Currently, only Windows Insider testers on the Canary and Dev channels can try the new text copying feature in the Snipping Tool, though if all goes well you can expect to see it hit all Windows 11 machines at some point in the future. The new function, called "Text Actions," is available in Snipping Tool version 11.2308.33.0.
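
Microsoft hasn't published how Text Actions works internally, but the general idea is easy to sketch. Below is a minimal, hypothetical Python example that pulls text out of a screenshot using the open-source Tesseract engine via the pytesseract package; this illustrates OCR-on-screenshots in general, not the Snipping Tool's actual code, and the file name is invented.

```python
# Minimal OCR sketch: extract text from a screenshot file.
# Assumes the Tesseract engine plus the pytesseract and Pillow
# packages are installed; this shows the general technique,
# not Microsoft's implementation.
from PIL import Image
import pytesseract

def copy_text_from_screenshot(path: str) -> str:
    """Return any text Tesseract can recognize in the image."""
    image = Image.open(path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(copy_text_from_screenshot("screenshot.png"))  # hypothetical file
```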


How Google, UCLA are prompting AI to choose the next action for a better answer

ZDNet

Google's AVIS program can dynamically select a series of steps to undertake, such as identifying an object in a picture and then looking up information about that object. Artificial intelligence programs have dazzled the public with how they produce an answer no matter what the query. However, the quality of the answer often falls short because programs such as ChatGPT merely respond to text input, with no particular grounding in the subject matter, and can produce outright falsehoods as a result. A recent research project from UCLA and Google instead enables large language models such as ChatGPT to select a specific tool -- be it web search or optical character recognition -- that can then seek an answer in multiple steps from an alternate source. The result is a primitive form of "planning" and "reasoning," a way for a program to determine at each moment how a question should be approached and, once addressed, whether the solution was satisfactory.
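
To make the idea concrete, here is a toy, hypothetical sketch of such a tool-selection loop: a planner (standing in for the LLM) picks the next tool, runs it, and stops once it judges the accumulated evidence sufficient. Every function and name here is invented for illustration; none of it is AVIS code.

```python
# Hypothetical sketch of dynamic tool selection in the spirit of AVIS:
# a planner repeatedly chooses the next tool (e.g. OCR, web search),
# executes it, and stops once the answer looks sufficient.
# All functions and names are invented for illustration.

def run_ocr(state):          # stand-in: read text from the image
    state["facts"].append("text found in image")
    return state

def run_web_search(state):   # stand-in: look up the identified object
    state["facts"].append("background info on object")
    return state

TOOLS = {"ocr": run_ocr, "web_search": run_web_search}

def plan_next_tool(state):
    """Toy planner: in AVIS this decision is made by an LLM,
    guided by transitions observed in human problem-solving."""
    if not state["facts"]:
        return "ocr"
    if len(state["facts"]) < 2:
        return "web_search"
    return None  # planner judges the evidence sufficient

def answer(question, image):
    state = {"question": question, "image": image, "facts": []}
    while (tool := plan_next_tool(state)) is not None:
        state = TOOLS[tool](state)
    return " / ".join(state["facts"])

print(answer("What is this object famous for?", "photo.png"))
```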


How to Use Google Bard to Find Images Faster

WIRED

AI tools are here to stay, helping us search the web, decide what to wear, improve visual effects in movies, land a better job, and more. As time goes on, these tools will of course get smarter and bolt on more functions, such as being able to scour the web for images. That's a feature that just got added to the ChatGPT rival Google Bard. You can ask for pictures directly, as you might already do in a standard Google web search, and you can also get pictures inline with Bard's text responses. In its updates log, Google says that images can "bring concepts to life, make recommendations more persuasive and enhance responses when you ask for visual information."


Google adds more context and AI-generated photos to image search

Engadget

Google is adding some new features to its image search function to make it easier to spot altered content, the company announced at its I/O 2023 keynote Wednesday. Photos shown in search results will soon include an "about this image" option that tells users when the image and ones like it were first indexed by Google. You can also learn where it may have appeared first and see other places where the image has been posted online. That information could help users figure out whether something they're seeing was generated by AI, according to Google. For example, you'll be able to see if the image has been on fact-checking websites that point out whether an image is real or altered.


Reviews: Dense Associative Memory for Pattern Recognition

Neural Information Processing Systems

The theoretical contribution presented in lines 291--310 is a welcome insight into the computational power of ReLUs. The experimental results for rectified polynomial units reported in figures 2 and 3 are interesting and apparently novel, even in the context of standard feedforward multi-layer networks. Since lines 291--297 are a central point of the paper, they should be expanded and better justified. Furthermore, the simple capacity analysis developed on p. 3 for the polynomial energy function is invoked here for the rectified polynomial energy function; this needs to be justified. The paper starts from and mostly focuses on the associative memory (Hamiltonian) formulation, but the findings are then restricted to one-step retrieval.
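
For context, the energy under discussion is the dense associative memory form E(σ) = −Σ_μ F(ξ^μ · σ), with F a polynomial F(x) = x^n or rectified polynomial F(x) = max(0, x)^n interaction function. A minimal NumPy sketch of the one-step retrieval dynamics the reviewer refers to might look like the following; it is an illustrative reconstruction, not the authors' code.

```python
# Minimal sketch of a dense associative memory with a rectified
# polynomial energy F(x) = max(0, x)**n.
# Illustrative reconstruction, not the reviewed paper's code.
import numpy as np

def F(x, n=3):
    return np.maximum(x, 0.0) ** n          # rectified polynomial unit

def energy(sigma, memories, n=3):
    # E(sigma) = -sum_mu F(xi_mu . sigma)
    return -F(memories @ sigma, n).sum()

def one_step_retrieval(sigma, memories, n=3):
    """Set each spin to the sign that gives the lower energy (one update)."""
    new = sigma.copy()
    for i in range(len(sigma)):
        plus, minus = sigma.copy(), sigma.copy()
        plus[i], minus[i] = 1.0, -1.0
        new[i] = 1.0 if energy(plus, memories, n) < energy(minus, memories, n) else -1.0
    return new

rng = np.random.default_rng(0)
memories = rng.choice([-1.0, 1.0], size=(5, 32))            # 5 stored patterns
probe = memories[0] * rng.choice([1.0, 1.0, 1.0, -1.0], size=32)  # corrupted copy
recalled = one_step_retrieval(probe, memories)
print((recalled == memories[0]).mean())  # fraction of spins recovered
```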


Unsupervised Learning of Spoken Language with Visual Context

Neural Information Processing Systems

Humans learn to speak before they can read or write, so why can't computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data, which comprises over 120,000 spoken audio captions for the Places image dataset, and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.
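
The training signal in models of this kind is typically a ranking objective over matched and mismatched image/audio pairs: a true pair should score higher than a random pairing. The schematic below illustrates that objective with placeholder encoders; the function names and inputs are invented and stand in for the paper's actual networks.

```python
# Schematic of a matched-vs-mismatched ranking loss for paired
# image/audio embeddings; the encoders are placeholders, not the
# paper's architectures.
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image):
    return rng.standard_normal(64)   # placeholder for a vision network

def encode_audio(spectrogram):
    return rng.standard_normal(64)   # placeholder for an audio network

def margin_ranking_loss(img, aud, aud_wrong, margin=1.0):
    """A matched pair should out-score a mismatched pair by a margin."""
    s_pos = img @ aud
    s_neg = img @ aud_wrong
    return max(0.0, margin - s_pos + s_neg)

img = encode_image("beach.jpg")                  # hypothetical inputs
aud = encode_audio("spoken caption about a beach")
aud_wrong = encode_audio("spoken caption about a street")
print(margin_ranking_loss(img, aud, aud_wrong))
```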


Reviews: Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data

Neural Information Processing Systems

Method and Novelty: The authors present a model that has a number of strengths. First, the character-level model is trained on synthetically generated images from a font library, independently of the training corpus. Second, the model converts each training image into a factor graph and learns the spatial relationships between landmarks in each character. This model can readily assign a probability to each candidate character for an image, and the authors describe a two-stage inference algorithm that consists of approximate belief propagation followed by refinement via a backtracking procedure. The candidate characters are then supplied to a word model, which is a fairly standard structured prediction model using bigram and trigram features.
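
To illustrate the word-model stage: once the shape model has scored candidate characters at each position, decoding is roughly a lattice search that trades per-character evidence against n-gram plausibility. The toy bigram Viterbi below shows the shape of that computation; all scores are invented, and the paper's structured predictor is richer than this sketch.

```python
# Toy bigram Viterbi decode over per-position character candidates,
# illustrating the word-model stage described above. All scores are
# invented; the paper's structured predictor is richer than this.

def viterbi_decode(candidates, bigram_logp):
    """candidates: list of {char: log_prob} dicts, one per position."""
    # best[c] = (score of best path ending in char c, that path)
    best = {c: (lp, c) for c, lp in candidates[0].items()}
    for pos in candidates[1:]:
        new = {}
        for c, lp in pos.items():
            score, path = max(
                (prev_score + bigram_logp.get((prev_c, c), -5.0) + lp, path + c)
                for prev_c, (prev_score, path) in best.items()
            )
            new[c] = (score, path)
        best = new
    return max(best.values())[1]

candidates = [{"c": -0.2, "e": -1.8},
              {"a": -0.4, "o": -1.1},
              {"t": -0.3, "f": -2.0}]
bigram_logp = {("c", "a"): -0.5, ("a", "t"): -0.4}  # toy language model
print(viterbi_decode(candidates, bigram_logp))       # -> "cat"
```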


Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data

Neural Information Processing Systems

Abstract: We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required by competing discriminative methods. In addition to transcribing text from challenging images, our method performs fine-grained instance segmentation of characters. We show that our model is more robust to both affine transformations and non-affine deformations compared to previous approaches.


Few-Shot Adversarial Domain Adaptation

Neural Information Processing Systems

This work provides a framework for addressing the problem of supervised domain adaptation with deep models. The main idea is to exploit adversarial learning to learn an embedded subspace that simultaneously maximizes the confusion between two domains while semantically aligning their embeddings. The supervised setting becomes attractive especially when there are only a few target data samples that need to be labeled. In this few-shot learning scenario, alignment and separation of semantic probability distributions is difficult because of the lack of data. We found that by carefully designing a training scheme whereby the typical binary adversarial discriminator is augmented to distinguish between four different classes, it is possible to effectively address the supervised adaptation problem. In addition, the approach has a high "speed" of adaptation, i.e., it requires extremely few labeled target training samples; even one per category can be effective. We then extensively compare this approach to the state of the art in domain adaptation in two experiments: one using datasets for handwritten digit recognition, and one using datasets for visual object recognition.
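
The four-class discriminator is easiest to see in how training pairs are grouped: each pair is binned by whether its two samples share a domain and whether they share a label, yielding four groups instead of the usual real/fake two. The sketch below shows that grouping; the group numbering is invented for illustration, and the full adversarial training procedure built on top of it is in the paper.

```python
# Sketch of the four-way pair grouping behind the augmented
# discriminator: pairs are labeled by (same domain?, same class?).
# Group numbering is illustrative, not the paper's notation.

def pair_group(domain_a, label_a, domain_b, label_b):
    same_domain = (domain_a == domain_b)
    same_class = (label_a == label_b)
    if same_domain and same_class:
        return 0  # e.g. two source samples, same class
    if same_domain and not same_class:
        return 1  # two source samples, different classes
    if not same_domain and same_class:
        return 2  # source/target pair, same class
    return 3      # source/target pair, different classes

# The discriminator learns these four groups; the encoder is then
# trained so group-0 pairs become indistinguishable from group-2
# pairs (and 1 from 3), aligning the domains class-by-class.
print(pair_group("source", "digit-3", "target", "digit-3"))  # -> 2
```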


Connectionist Temporal Classification with Maximum Entropy Regularization

Neural Information Processing Systems

Connectionist Temporal Classification (CTC) is an objective function for end-to-end sequence learning, which adopts dynamic programming algorithms to directly learn the mapping between sequences. CTC has shown promising results in many sequence learning applications including speech recognition and scene text recognition. However, CTC tends to produce highly peaky and overconfident distributions, which is a symptom of overfitting. To remedy this, we propose a regularization method based on maximum conditional entropy which penalizes peaky distributions and encourages exploration. We also introduce an entropy-based pruning method to dramatically reduce the number of CTC feasible paths by ruling out unreasonable alignments. Experiments on scene text recognition show that our proposed methods consistently improve over the CTC baseline without the need to adjust training settings.
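
The paper's regularizer is the conditional entropy over feasible CTC paths, computed with a modified forward recursion; the sketch below substitutes a much cruder per-frame output entropy purely to show the shape of the combined objective (CTC loss minus an entropy bonus). That simplification is ours, not the authors', and the tensor shapes are illustrative.

```python
# Hedged sketch of CTC plus an entropy bonus. NOTE: the paper
# regularizes the entropy of the distribution over feasible CTC
# paths (via a modified forward algorithm); the per-frame entropy
# below is a crude stand-in used only to show the objective's shape.
import torch
import torch.nn.functional as F

def ctc_with_entropy_bonus(log_probs, targets, input_lens, target_lens, beta=0.1):
    """log_probs: (T, N, C) log-softmax outputs, blank index 0."""
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens, blank=0)
    probs = log_probs.exp()
    frame_entropy = -(probs * log_probs).sum(dim=-1).mean()  # proxy term
    return ctc - beta * frame_entropy  # reward less peaky outputs

T, N, C = 50, 4, 20
log_probs = torch.randn(T, N, C).log_softmax(-1)
targets = torch.randint(1, C, (N, 10))  # labels 1..C-1 (0 is blank)
loss = ctc_with_entropy_bonus(
    log_probs, targets,
    torch.full((N,), T, dtype=torch.long),
    torch.full((N,), 10, dtype=torch.long),
)
print(loss.item())
```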