Turning migration into modernization
The VMware shake-up has created an IT inflection point. Leaders are now weighing whether to renew, migrate, or redesign entirely for the cloud era. In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap. Amid concerns over VMware licensing changes and steeper support costs, analysts observed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes.
- Information Technology > Virtualization (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.37)
EvoGraph: Hybrid Directed Graph Evolution toward Software 3.0
Costa, Igor, Baran, Christopher
We introduce **EvoGraph**, a framework that enables software systems to evolve their own source code, build pipelines, documentation, and tickets. EvoGraph represents every artefact in a typed directed graph, applies learned mutation operators driven by specialized small language models (SLMs), and selects survivors with a multi-objective fitness function. On three benchmarks, EvoGraph fixes 83% of known security vulnerabilities, translates COBOL to Java with 93% functional equivalence (test-verified), and maintains documentation freshness within two minutes. Experiments show a 40% latency reduction and a sevenfold drop in feature lead time compared with strong baselines. We extend our approach to **evoGraph**, leveraging language-specific SLMs for modernizing .NET, Lisp, CGI, ColdFusion, legacy Python, and C codebases, achieving 82-96% semantic equivalence across languages while reducing computational costs by 90% compared to large language models. EvoGraph's design responds to empirical failure modes in legacy modernization, such as implicit contracts, performance preservation, and integration evolution. Our results suggest a practical path toward Software 3.0, where systems adapt continuously yet remain under measurable control.
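The selection loop the abstract describes (a typed artefact graph, learned mutation operators, and multi-objective survivor selection) can be sketched in miniature. Everything below is illustrative rather than taken from the EvoGraph implementation: the mutation step merely stands in for an SLM call, and the fitness scores are simulated.

```python
import random

random.seed(0)

class Artefact:
    """A node in the typed directed graph: source, pipeline, doc, or ticket."""
    def __init__(self, kind, content, score=(0.5, 0.5)):
        self.kind = kind
        self.content = content
        self.score = score        # (test-pass rate, speed); higher is better
        self.edges = []           # typed directed edges to related artefacts

def mutate(parent):
    # Stand-in for an SLM-driven mutation operator; a real system would
    # ask a small language model for a concrete edit to the artefact.
    child = Artefact(parent.kind, parent.content + "*")
    child.score = (random.random(), random.random())  # simulated evaluation
    return child

def dominates(a, b):
    # Pareto dominance: at least as good on every objective, better on one.
    return (all(x >= y for x, y in zip(a.score, b.score))
            and a.score != b.score)

def evolve(population, generations=5, offspring=8):
    for _ in range(generations):
        pool = population + [mutate(random.choice(population))
                             for _ in range(offspring)]
        # Survivor selection: keep the non-dominated (Pareto) front.
        population = [p for p in pool
                      if not any(dominates(q, p) for q in pool)]
    return population

survivors = evolve([Artefact("source", "legacy_module")])
```

Multi-objective selection matters here because a code mutation that speeds up a module but breaks a test should not displace a slower, correct variant; the Pareto front keeps both trade-offs alive.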
Can LLMs Replace Humans During Code Chunking?
Glasz, Christopher, Escamilla, Emily, Scott, Eric O., Patel, Anand, Zimmer, Jacob, Diggs, Colin, Doyle, Michael, Rosen, Scott, Naik, Nitin, Brunelle, Justin F., Thaker, Samruddhi, Poudel, Parthav, Sridharan, Arun, Madan, Amit, Wendt, Doug, Macke, William, Schill, Thomas
Large language models (LLMs) have become essential tools in computer science, especially for tasks involving code understanding and generation. However, existing work does not address many of the unique challenges presented by code written for government applications. In particular, government enterprise software is often written in legacy languages like MUMPS or assembly language code (ALC), and the overall token lengths of these systems exceed the context window size of current commercially available LLMs. Additionally, LLMs are primarily trained on modern software languages and have undergone limited testing with legacy languages, making their ability to understand legacy languages unknown and, hence, an area for empirical study. This paper examines the application of LLMs in the modernization of legacy government code written in ALC and MUMPS, addressing the challenges of input limitations. We investigate various code-chunking methods to optimize the generation of summary module comments for legacy code files, evaluating the impact of code-chunking methods on the quality of documentation produced by different LLMs, including GPT-4o, Claude 3 Sonnet, Mixtral, and Llama 3. Our results indicate that LLMs can select partition points closely aligned with human expert partitioning. We also find that chunking approaches have a significant impact on downstream tasks such as documentation generation. LLM-created partitions produce comments that are up to 20% more factual and up to 10% more useful than when humans create partitions. Therefore, we conclude that LLMs can be used as suitable replacements for human partitioning of large codebases during LLM-aided modernization.
- Health & Medicine (0.91)
- Government (0.66)
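As a rough illustration of the chunking problem these experiments study, a greedy partitioner might cut a legacy file at routine boundaries whenever the running chunk would exceed a token budget. This sketch is hypothetical: the paper compares several chunking methods, and the boundary regex, token proxy, and budget below are invented for the example.

```python
import re

MAX_TOKENS = 200          # assumed context budget for the example

def count_tokens(text):
    # Crude proxy: whitespace tokens. A real pipeline would use the
    # target model's own tokenizer.
    return len(text.split())

def chunk_source(lines, boundary=re.compile(r"^\w+\s+PROC|^[A-Z%]+\s*;")):
    """Greedy chunker: start a new chunk at a routine boundary once the
    current chunk would exceed the token budget."""
    chunks, current = [], []
    for line in lines:
        at_boundary = bool(boundary.match(line))
        over_budget = count_tokens("\n".join(current + [line])) > MAX_TOKENS
        if current and at_boundary and over_budget:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

# Synthetic ALC-flavoured input: six routines of 40 lines each.
demo = []
for i in range(6):
    demo.append(f"SUB{i} PROC")
    demo.extend(f"  MOVE A TO B{i}{j}" for j in range(39))
chunks = chunk_source(demo)
```

Splitting only at boundaries keeps each routine intact (a chunk can therefore overshoot the budget by one routine), which mirrors the paper's observation that partition-point choice, not just chunk size, drives documentation quality downstream.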
Application Modernization with LLMs: Addressing Core Challenges in Reliability, Security, and Quality
Ponnusamy, Ahilan Ayyachamy Nadar
AI-assisted code generation tools have revolutionized software development, offering unprecedented efficiency and scalability. However, multiple studies have consistently highlighted challenges such as security vulnerabilities, reliability issues, and inconsistencies in the generated code. Addressing these concerns is crucial to unlocking the full potential of this transformative technology. While advancements in foundational and code-specialized language models have made notable progress in mitigating some of these issues, significant gaps remain, particularly in ensuring high-quality, trustworthy outputs. This paper builds upon existing research on leveraging large language models (LLMs) for application modernization. It explores an opinionated approach that emphasizes two core capabilities of LLMs: code reasoning and code generation. The proposed framework integrates these capabilities with human expertise to tackle application modernization challenges effectively. It highlights the indispensable role of human involvement and guidance in ensuring the success of AI-assisted processes. To demonstrate the framework's utility, this paper presents a detailed case study, walking through its application in a real-world scenario. The analysis includes a step-by-step breakdown, assessing alternative approaches where applicable. This work aims to provide actionable insights and a robust foundation for future research in AI-driven application modernization. The reference implementation created for this paper is available on GitHub.
Lean Methodology for Garment Modernization
Kong, Ray Wai Man, Kong, Theodore Ho Tin, Huang, Tianxu
This article presents a lean methodology for modernizing garment manufacturing, focusing on lean thinking, lean practices, automation development, VSM, and CRP, and how to integrate them effectively. While isolated automation of specific operations can improve efficiency and reduce cycle time, it does not necessarily enhance overall garment output and efficiency. To achieve these broader improvements, it is essential to consider the entire production line and process, using VSM and CRP to optimize production and center balance. This approach can increase efficiency and reduce manufacturing costs, labor time, and lead time, ultimately adding value to the company and factory.
- Asia > China > Hong Kong (0.06)
- Research Report (0.82)
- Workflow (0.70)
FinEmbedDiff: A Cost-Effective Approach of Classifying Financial Documents with Vector Sampling using Multi-modal Embedding Models
Biswas, Anjanava, Talukdar, Wrick
Accurate classification of multi-modal financial documents, containing text, tables, charts, and images, is crucial but challenging. Traditional text-based approaches often fail to capture the complex multi-modal nature of these documents. We propose FinEmbedDiff, a cost-effective vector sampling method that leverages pre-trained multi-modal embedding models to classify financial documents. Our approach generates multi-modal embedding vectors for documents, and compares new documents with pre-computed class embeddings using vector similarity measures. Evaluated on a large dataset, FinEmbedDiff achieves competitive classification accuracy compared to state-of-the-art baselines while significantly reducing computational costs. The method exhibits strong generalization capabilities, making it a practical and scalable solution for real-world financial applications.
- Banking & Finance (1.00)
- Information Technology > Software (0.34)
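The classification scheme the abstract outlines (pre-computed class embeddings compared against new document vectors by similarity) reduces, in its simplest form, to nearest-centroid matching under cosine similarity. A minimal sketch, with random vectors standing in for the multi-modal embedding model and invented class labels:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64                  # toy embedding dimension

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def class_embeddings(samples_by_class):
    """Pre-compute one centroid embedding per document class."""
    return {label: np.mean(vecs, axis=0)
            for label, vecs in samples_by_class.items()}

def classify(doc_vec, centroids):
    # Assign the class whose pre-computed embedding is most similar.
    return max(centroids, key=lambda label: cosine(doc_vec, centroids[label]))

# Stand-in embeddings: each class clusters around its own direction.
anchors = {"10-K": rng.normal(size=DIM), "invoice": rng.normal(size=DIM)}
samples = {label: [anchor + 0.1 * rng.normal(size=DIM) for _ in range(20)]
           for label, anchor in anchors.items()}
centroids = class_embeddings(samples)

new_doc = anchors["invoice"] + 0.1 * rng.normal(size=DIM)
predicted = classify(new_doc, centroids)
```

Because the per-class centroids are computed once, classifying a new document costs one embedding call plus a handful of dot products rather than a full model inference per comparison, which is in the spirit of the cost savings the abstract emphasizes.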
Seeking a successful path to core modernization
Modern banking is a far cry from the analog processes of yesteryear. Today's smartphone-wielding customers demand hyper-personalized transactions woven seamlessly into everyday life, one-to-one personal service with their data at agents' fingertips, and instantaneous financial insights--and some are even pressing for features like blockchain integration and support for digital currencies. But the ability to serve these customers is not assured: Gartner predicts that by 2025 more than 85% of organizations will move forward with cloud principles, but will not yet be able to fully use cloud-native architectures and technologies. These tools will be key to banks' ability to move to digital ecosystem platforms and develop new services, partner with other players, work effectively with colleagues, and meet customer expectations. Financial institutions are under pressure to future-proof and accommodate emerging technologies such as artificial intelligence (AI), machine learning (ML), and cloud computing--and they are also facing significant infrastructural strains.
Detecting Fake Job Postings Using Bidirectional LSTM
Fake job postings have become prevalent in the online job market, posing significant challenges to job seekers and employers. Despite the growing need to address this problem, there is limited research that leverages deep learning techniques for the detection of fraudulent job advertisements. This study aims to fill the gap by employing a Bidirectional Long Short-Term Memory (Bi-LSTM) model to identify fake job advertisements. Our approach considers both numeric and text features, effectively capturing the underlying patterns and relationships within the data. The proposed model demonstrates superior performance, achieving a 0.91 ROC AUC score and a 98.71% accuracy rate, indicating its potential for practical applications in the online job market. The findings of this research contribute to the development of robust, automated tools that can help combat the proliferation of fake job postings and improve the overall integrity of the job search process. Moreover, we discuss challenges, future research directions, and ethical considerations related to our approach, aiming to inspire further exploration and development of practical solutions to combat online job fraud.
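To make the architecture concrete: a bidirectional LSTM reads the embedded posting text in both directions, the two final hidden states are concatenated with the numeric features, and a sigmoid output yields a fraud score. The forward-pass sketch below uses random weights and toy sizes; it illustrates the shape of such a model, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    def __init__(self, n_in, n_hidden):
        scale = 1.0 / np.sqrt(n_in + n_hidden)
        # One stacked weight matrix for the input, forget, output, and
        # candidate gates.
        self.W = rng.normal(0, scale, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.nh = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g
        return np.tanh(c) * o, c

def run(cell, xs):
    h = c = np.zeros(cell.nh)
    for x in xs:
        h, c = cell.step(x, h, c)
    return h

def bilstm_score(text_vecs, numeric_feats, fwd, bwd, w_out):
    # Read the sequence in both directions, then fuse with numeric features.
    h = np.concatenate([run(fwd, text_vecs),
                        run(bwd, text_vecs[::-1]),
                        numeric_feats])
    return sigmoid(w_out @ h)

EMB, HID, NUM = 16, 8, 3                       # toy sizes
fwd, bwd = LSTMCell(EMB, HID), LSTMCell(EMB, HID)
w_out = rng.normal(0, 0.1, 2 * HID + NUM)
posting = [rng.normal(size=EMB) for _ in range(12)]   # embedded tokens
score = bilstm_score(posting, rng.normal(size=NUM), fwd, bwd, w_out)
```

Fusing the numeric features (e.g. salary fields present, telecommuting flags) after the recurrent layers lets the classifier weigh structured signals alongside the text representation, which is the "both numeric and text features" design the abstract describes.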
Verta Insights Study Reveals Companies Continue to Push Investments in AI Technology and Talent Despite Economic Headwinds
Verta, the Operational AI company, today released findings from the 2023 AI/ML Investment Priorities study, which surveyed more than 460 AI and machine learning (ML) practitioners to benchmark AI/ML spending plans across industry sectors in light of evolving technology trends, industry developments, and macroeconomic conditions. The study was conducted by Verta Insights, the research practice of Verta Inc., and found that nearly two-thirds of organizations are planning to either increase or maintain their spending on AI/ML technology and infrastructure despite economic headwinds in the broader market. "We currently are experiencing an inflection point for the AI/ML industry, with technologies like ChatGPT and Stable Diffusion driving heightened interest in how companies can leverage machine learning models to significantly automate human-based activities with very innovative and game-changing capabilities. Findings from our research study confirm that organizations are continuing to make significant investments in AI/ML technology and talent, despite turbulence in the market, as they orient their business strategies around creating intelligent experiences for their customers," said Conrado Silva Miranda, Chief Technology Officer of Verta. In the research study, 31% of respondents said that their organizations would increase AI/ML spending in 2023 due to the current economic conditions, while 32% said that they would maintain 2022 spending levels for AI/ML technology and infrastructure.
- Questionnaire & Opinion Survey (0.93)
- Press Release (0.65)
- Information Technology (0.48)
- Banking & Finance (0.35)
Action and Inaction on Data, Analytics, and AI
The title of this column series is "AI in Action," and there has indeed been a lot of action over the past year. Judging from the 11th annual NewVantage Partners survey of senior data and analytics executives, some trends are moving in the right direction. For example, more companies are creating senior roles to focus on data and analytics. The chief data officer role has quickly become much more common over time and across more industries; in the survey, 83% of companies have appointed a CDO or chief data and analytics officer (CDAO). An increasing number of companies (69% in this year's survey) are officially incorporating analytics and AI into the CDO role, and we think that's a good idea.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science (0.67)