Saturday, March 29, 2025

Army eyes artificial intelligence to enhance future Golden Dome





The U.S. Army is looking to increase autonomy through artificial intelligence solutions to reduce the manpower needed to manage Golden Dome, President Donald Trump’s desired homeland missile defense architecture, the service’s program executive officer for missiles and space said this week.

As the Army contributes a large portion of the in-development air and missile defense architecture for Guam, it is looking to adapt those capabilities for a Golden Dome application, Maj. Gen. Frank Lozano told Defense News in an interview at Redstone Arsenal on Wednesday amid the Association of the U.S. Army’s Global Force Symposium in Huntsville, Alabama.

Some of the Army’s major contributions to the Guam Defense System include new modernized radars, an emerging Indirect Fire Protection Capability and its new Integrated Battle Command System, or IBCS.

“What we’re trying to do is three things,” Lozano said. “We’re wanting to integrate more AI-enabled fire control so that will help us reduce the manpower footprint. We’re wanting to create more remotely operated systems so that we don’t have to have so many operators and maintainers associated with every single piece of equipment that’s out there.”

And, he said, “We need to have more autonomously operated systems.”

Currently, the Army typically has a launcher with a missile and a launcher crew consisting of at least two to three soldiers.

“In the Golden Dome application, we would likely either have containerized missiles — think box of rockets — or we might actually put rockets and missiles in the ground,” Lozano said. Those systems would require less frequent upkeep: with a smaller manpower footprint, status checks might happen only every couple of weeks, and test checks would be conducted remotely, he said.

To develop such capability, the Army plans to use what it learns from maturing the Guam Defense System, which will become operational in roughly 2027 with Army assets. Beyond 2026, the service will also pivot its Integrated Fires Test Campaign, or IFTC, from incrementally testing the Guam architecture to injecting autonomy and AI into those systems for Golden Dome.

The IFTC in 2026 is considered the Guam Defense System “Super Bowl,” Lozano said. Then, beyond 2027, he said, “If we’re called upon to support Golden Dome initiatives, we need to have those advanced AI, remotely operated, autonomous-based formations and systems ready to go.”

To begin, the Army will be focused on defining the functions that human operators perform at all the operator terminals within an IBCS-integrated fire control center or at a particular launching station, Lozano said.

Once those functions are defined, Lozano said, the Army will have to define the data sources that drive action.

“We have to create the decision rubric that assesses and analyzes that data that then drives a human decision, and then we have to code AI algorithms to be able to process that information and make the right decision,” Lozano said. “There will be trigger points where the software has to say, ‘I’m not authorized to make that level of decision. It’s got to go back to the human and deliver.’”
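To make that escalation logic concrete, the following minimal Python sketch shows the general pattern of an AI recommendation with hard trigger points that hand the decision back to a human. The thresholds, field names and actions are invented for illustration and are not drawn from IBCS or any Army system.

```python
from dataclasses import dataclass

# Hypothetical track data an AI-enabled fire control node might evaluate.
@dataclass
class Track:
    track_id: str
    classification_confidence: float  # 0.0-1.0, from the AI classifier
    threat_level: str                 # e.g. "low", "medium", "high"
    time_to_impact_s: float

AUTO_ENGAGE_CONFIDENCE = 0.98   # illustrative thresholds, not doctrine

def recommend_action(track: Track) -> str:
    """Return an action, escalating to a human when the software
    is not authorized to decide on its own."""
    if track.threat_level == "high":
        # Trigger point: high-consequence decisions always go to the operator.
        return "ESCALATE_TO_HUMAN"
    if track.classification_confidence >= AUTO_ENGAGE_CONFIDENCE:
        return "AUTO_ENGAGE"
    if track.time_to_impact_s < 30:
        # Low confidence and little time: flag for immediate human review.
        return "ESCALATE_TO_HUMAN_URGENT"
    return "CONTINUE_TRACKING"

print(recommend_action(Track("T-041", 0.99, "low", 240.0)))   # AUTO_ENGAGE
print(recommend_action(Track("T-042", 0.72, "high", 55.0)))   # ESCALATE_TO_HUMAN
```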

For the first time, the Army’s Program Executive Office Missiles and Space is interacting with many new market entrants in the AI realm to work on the effort.




Friday, March 28, 2025

Data and artificial intelligence: the fuel behind space discovery




Maintaining leadership in space is a primary goal for the United States Space Force, NASA and other federal agencies, and the U.S. remains focused on space exploration as a critical domain for missions, scientific investigation and national security.

One way to sustain a leading position is by achieving data dominance, leveraging artificial intelligence (AI) tools such as machine learning (ML) algorithms onboard space missions to facilitate and enable real-time decision-making.

These technologies can be used for engineering analysis and opportunistic science measurements. Data analytics and ML algorithms can also optimize resources, prioritize data to send back to Earth and identify patterns promptly.

The goal of these strategies is to develop spacecraft capable of real-time situational analysis, able to make autonomous decisions and further optimize space missions. Achieving autonomous science and exploration spacecraft requires a fundamental shift in the approach to space exploration, and beyond that, space organizations must navigate several technical considerations to implement this vision, including environmental constraints and adapting solutions to specific mission objectives.

Data-fueled space missions

Data analytics and ML algorithms are a driving force behind space missions. They can optimize resources, such as fuel and energy usage, assist in planning and scheduling processes of observation strategies for in-space telescopes and support the prioritization of the data to first send back to Earth.

ML algorithms on Earth already help identify patterns or correlations in massive datasets promptly (Earth science mission teams are not scaling with the large amounts of data on hand), but ML models onboard a spacecraft could make missions even more efficient. An ML-enabled spacecraft on a life-detection mission, for example, could analyze the data it gathers, identify organic compound signatures in real time and then prioritize further sampling locations without ground-in-the-loop intervention.

A long-term goal of this approach would be to have in situ analysis with spacecraft operating and analyzing in real-time, making autonomous decisions that prioritize scientific goals without depending solely on Earth-based operations.

Imagine a spacecraft on Saturn’s moon Enceladus, collecting data from the plumes being ejected at the south pole, then analyzing the data onboard and reprioritizing other operations without having to wait for a transmission from scientists on Earth — this could all be based on data collections using onboard AI-based models, software analysis and edge computing.

While this onboard implementation would help make decisions in situ to optimize resources and scientific returns, several hurdles must be overcome to see this vision through for a more efficient future.

Space exploration: data and challenges

One primary challenge in implementing AI-enhanced spacecraft is limited onboard computing power: strict power and weight limits make distributing power among communication, mobility, onboard experiments, computing and much more a difficult balancing act. The “space-proofing” process (thermal control, radiation shielding, and protection from meteoric and orbital debris) also complicates hardware development and raises costs.

Bandwidth limitations and communication delays present another challenge in data transmission. Moreover, when the planetary target is not in Earth’s direct line of sight, communication becomes entirely impossible with traditional spacecraft.

Above all, trusting the AI-driven strategy is a major challenge, particularly for life-detection missions. ML models are often seen as “black boxes,” making it difficult for scientists to fully trust the algorithms’ outcomes.

Embracing AI and ML in space exploration inspires optimism, curiosity and scientific discovery. To achieve this future, the industry must prioritize the development of hardware that allows real-time AI computation, the advancement of data transmission tools and continued investment in the Deep Space Network (DSN) to further improve the efficiency of data transmission for missions.

One of the main difficulties is that the space industry must show that new hardware is truly impactful. Space missions rely on flight heritage: to prove that new hardware works, the industry needs a process to fly it on demonstration missions and show that those missions succeed.

Onboard data prioritization must be tested too. Currently, space missions are designed to collect only as much data as can be sent back to Earth. With AI-enhanced spacecraft, a fundamental shift can occur: once data prioritization is implemented, the ability to transmit data back to Earth is no longer the limiting bottleneck.

The end goal is to collect as much data as the instrument onboard can, then have a smart algorithm onboard to prioritize the “most interesting” data to send back to Earth. More opportunities to test algorithms onboard during simulation and on low-risk science missions will bolster solutions’ technology readiness level.
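A minimal sketch of that idea, with invented scores and data volumes, might look like this: observations are ranked by an onboard "interest" score and a limited downlink budget is filled with the highest-value data first.

```python
# Minimal sketch: score observations onboard and fill a limited downlink budget
# with the highest-value data first. Scores and sizes are illustrative only.

def interest_score(obs: dict) -> float:
    """Toy scoring: novelty and signal strength both raise priority."""
    return 0.7 * obs["anomaly_score"] + 0.3 * obs["snr"]

def select_for_downlink(observations: list[dict], budget_mb: float) -> list[dict]:
    ranked = sorted(observations, key=interest_score, reverse=True)
    selected, used = [], 0.0
    for obs in ranked:
        if used + obs["size_mb"] <= budget_mb:
            selected.append(obs)
            used += obs["size_mb"]
    return selected

observations = [
    {"id": "spectrum_001", "anomaly_score": 0.91, "snr": 0.60, "size_mb": 40},
    {"id": "image_004",    "anomaly_score": 0.12, "snr": 0.85, "size_mb": 120},
    {"id": "spectrum_007", "anomaly_score": 0.55, "snr": 0.40, "size_mb": 25},
]

for obs in select_for_downlink(observations, budget_mb=80):
    print(obs["id"])  # highest-priority observations that fit the budget
```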

Space missions have long relied on pre-programmed instructions and extensive ground-in-the-loop analysis. Becoming data-driven in space is a complete change to that paradigm, but it is necessary for success, especially when exploring targets farther out in our solar system.

Data-driven future

Private sector collaboration is key to helping transform space missions — involving expert perspectives would provide innovative solutions and strategies to help space teams develop spacecraft that can process, transport and interpret critical data.

This collaboration could be leveraged to enable real-time AI computations directly onboard the spacecraft, while also enhancing data processing pipelines for operations teams, from data collection to prioritization. Moreover, this collaboration could help accelerate the development of AI processors for space applications, ensuring they remain radiation-proof and extremely power-efficient. These collaborations are already occurring, such as NASA and IBM’s AI partnership.

Space agencies and the overall space industry must also implement an intelligent data collection solution, or data processing pipeline, that includes data collection, data labeling, analysis and appropriate data management, so teams can access it and make decisions in near real time for mission-critical operations.

Data can also be leveraged in ML models for various applications, including anomaly detection, hardware failure prediction, and science data analysis. This can be done through training models on Earth and then fine-tuning them for specific space mission targets.
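As a rough illustration of the train-on-Earth, adapt-for-the-mission pattern, the sketch below uses scikit-learn's incremental learning on synthetic telemetry; the data, features and model choice are assumptions, not any agency's actual pipeline.

```python
# Minimal sketch of the "train on Earth, adapt for the mission" pattern using
# scikit-learn's incremental learning; all data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Large archive of labeled telemetry gathered on Earth (nominal vs. anomalous).
X_archive = rng.normal(size=(5000, 8))
y_archive = (X_archive[:, 0] + X_archive[:, 3] > 1.5).astype(int)

# Small set of samples representative of the specific mission target.
X_mission = rng.normal(loc=0.3, size=(200, 8))
y_mission = (X_mission[:, 0] + X_mission[:, 3] > 1.2).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_archive, y_archive, classes=[0, 1])  # pre-train on Earth
model.partial_fit(X_mission, y_mission)                   # adapt to the mission

# Onboard, the adapted model flags anomalous telemetry frames.
new_frame = rng.normal(loc=0.3, size=(1, 8))
print("anomaly" if model.predict(new_frame)[0] else "nominal")
```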

Using big data more effectively will also allow teams on Earth to develop visualization and simulation tools. These could involve digital twin investigations — virtual replicas of spacecraft and planetary environments to simulate missions and test algorithms before deployment — leading to smarter, more decisive actions for mission operations.

Cultivating a data-driven environment is not just about implementing next-gen tools, but it is a catalyst for the next frontier of space discovery and exploration.

Advancing AI-enhanced space exploration requires interdisciplinary collaboration (among AI experts, software engineers, astrobiologists and so forth) to ensure tools and models are adaptable and scalable. As computing power and onboard capabilities improve, data-intensive tasks (such as spectral analysis through ML) could increasingly be performed in space, enabling real-time insights and collaborative science discoveries and unlocking the next frontier of space exploration.




Friday, March 21, 2025

The conquest of Artificial Intelligence and its perils





It still seems incredible to think about how quickly artificial intelligence has advanced and how much it has changed humanity in so short a time. It feels as if it has been longer, but it was only on November 30, 2022, that OpenAI launched ChatGPT. This conversational bot changed everything from the processes of entire companies to the way education is conceived. Its conquest has been of such magnitude that the Nobel committee awarded John Hopfield and Geoffrey Hinton the prize in Physics for discoveries and inventions that laid the foundations of machine learning and artificial intelligence. That is the scale of artificial intelligence's conquest: the methods it has changed, the tools available to us today, and the inventions of the present and the future.

Speaking of the universe of Artificial Intelligence is like entering a deep sea full of complications that also yield solutions. The award-winning researchers used tools from physics to develop methods that are the basis of today's powerful machine learning, and that was reason enough for the Nobel committee to choose them. It is no small thing that these are the merits that won such a prestigious award.

Professor Hopfield conducts his research at Princeton University, while Professor Hinton works at the University of Toronto. Both laureates applied fundamental concepts from statistical physics to the design of artificial neural networks that function as associative memories and find patterns in large batches of data.

We may believe that Artificial Intelligence processes are complex, and they undoubtedly are, but the ease of using them in daily activities is what made them so popular and widespread. That is the paradigm under which Artificial Intelligence works: fast and accurate. Facial recognition on our mobile phones is just one example of this applicability. Our biometric data is used today to recognize us when entering and leaving an office, or when accessing our bank accounts and various digital platforms. Something we used to imagine as science-fiction fantasy is today part of our daily lives.

The achievements of Artificial Intelligence will be as relevant as the progress generated by the Industrial Revolution. The leap is as great as the one experienced when artisanal production was exchanged for factory production. And, although I am sure that there was resistance and voices that criticized or went against it, in the face of progress there is no alternative. There are those who embrace it and those who do not, those who jump on the trend and those who are left behind.

Of course, the benefits and their effects are already evident. Professor Hinton predicted that artificial intelligence would "end up having a huge influence on civilization and lead to improvements in productivity and healthcare." This gives hope to people who suffer or see suffering from diseases that today seem to have no cure.

However, we cannot be naïve. In addition to applauding the progress and advances that Artificial Intelligence brings, we must also open our eyes, pay attention, and realize that not everything is pristine and perfect. It is necessary to address the concerns expressed about a series of possible harmful consequences, such as the alteration of photographs (we have already seen the Pope in a very elegant white coat from Dolce & Gabbana), as well as other kinds of fraud and bad practice. We must consider the threat of these things getting out of control.

Of course, all that glitters is not gold. Although the committee awarded them the Nobel Prize in Physics, it also recognized that the science behind machine learning and artificial intelligence has its negative aspects. We need to understand that while machine learning has enormous benefits, its rapid development has also raised fears about our future. Fears that are legitimate.

Humanity in general, and each of us in particular, has a responsibility to use this new technology in a safe and ethical way for the greater benefit of human beings. We cannot cover the sun with one finger. Even Professor Hinton himself shares those concerns. He explains that he left a position at Google so he could speak more freely about the dangers of the technology he helped create. We must listen to the voices that warn about the excesses and misuses of technological advances.

Professor Hinton acknowledged being shocked by the recognition. "I'm dumbfounded. I had no idea this was going to happen," he said when the Nobel committee contacted him by phone. It is time to understand that machines can learn, and to take charge of laying the foundations for the development of Artificial Intelligence. We thought learning was an attribute of animal brains; now we see that artificial brains can also learn.

The work of both scientists has helped computers to be able to imitate human functions such as memory and learning. Hopfield created an associative memory in 1982, which could store and reconstruct images and other types of patterns in data. Hinton, for his part, developed a method that allows a machine to find properties in data autonomously and thus perform tasks such as identifying specific elements in images. These investigations and achievements paved the way for artificial intelligence systems such as ChatGPT.
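The associative-memory idea can be shown in a few lines of Python: a Hopfield-style network stores a binary pattern in a Hebbian weight matrix and then recovers it from a corrupted copy. The pattern and sizes here are illustrative only.

```python
import numpy as np

# Minimal Hopfield-style associative memory: store one binary (+1/-1) pattern
# with a Hebbian weight matrix, then recover it from a noisy version.
rng = np.random.default_rng(1)

pattern = rng.choice([-1, 1], size=64)            # the "memory" to store
W = np.outer(pattern, pattern).astype(float)      # Hebbian learning rule
np.fill_diagonal(W, 0)                            # no self-connections

# Corrupt 10 of the 64 bits, then let the network settle back to the memory.
state = pattern.copy()
flip = rng.choice(64, size=10, replace=False)
state[flip] *= -1

for _ in range(5):                                # synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print("bits recovered:", int(np.sum(state == pattern)), "of 64")
```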

"We have no experience of what it's like to have smarter things than us," he said. "It's going to be wonderful in many ways, but we also have to worry about a number of potential negative consequences." And, yes, the professor himself declares: "My guess is that, within five or 20 years, there will be a 50% chance that we will have to face the problem of artificial intelligence trying to take control of our lives."

To do this, we need to get down to work today and not 50 years from now. We need to lay the foundations for a responsible and ethical use of Artificial Intelligence, to enjoy its benefits, and to restrict its dark aspects. That way, we can enjoy the conquests of Artificial Intelligence before allowing it to conquer us.




Wednesday, March 19, 2025

Taco Bell Parent Accelerates AI Innovation With Nvidia




Quick-service restaurant (QSR) giant Yum! Brands has partnered with Nvidia to develop and scale artificial intelligence (AI) technologies for its restaurants.

These technologies will be deployed at the QSR company’s KFC, Taco Bell, Pizza Hut and Habit Burger & Grill restaurants, the companies said in a Tuesday (March 18) press release.

The partnership will help scale Yum! Brands’ existing proprietary AI-driven restaurant technology platform Byte by Yum!, according to the release.

“This partnership will enable us to harness the rich consumer and operational datasets on our Byte by Yum! integrated platform to build smarter AI engines that will create easier experiences for our customers and team members,” Joe Park, chief digital and technology officer at Yum! Brands and president of Byte by Yum!, said in the release.

The AI solutions will include voice-automated order-taking AI agents for drive-thru and call center operations, computer vision for optimizing drive-thru efficiency and back-of-house labor management, and AI-driven analytics and agents for assessing restaurant performance and generating personalized action plans for restaurant managers, according to the release.

Yum! Brands has already piloted several AI solutions in select Taco Bell and Pizza Hut locations and, after the success of the pilot, plans to roll out the technology to 500 Pizza Hut, Taco Bell, KFC and Habit Burger restaurants during the second quarter, the release said.

Looking ahead, the company plans to continue expanding its use of AI and integrate more advanced AI models, developing solutions that will be built with the latest Nvidia software and be proprietary to Yum!, per the release.

Andrew Sun, global director of retail, CPG and QSR business development at Nvidia, said in the release that working with the Yum! Brands team and platform to integrate Nvidia AI software “breaks barriers to AI innovation in the restaurant industry — delivering real-time, context-aware intelligence, powered by a scalable inference platform.”

Yum! Brands said in February that the Byte by Yum! AI-driven platform was already in use at 25,000 international locations and will be rolled out throughout its global locations.

In November, the company said that it had processed over 2 million successful orders with the drive-thru voice AI system it had in place in over 300 Taco Bell stores in the U.S. and that many franchisees were eager to test this innovation at their own locations.

Yum! Brands also said in November that early, limited pilots of AI-powered marketing campaigns had delivered double-digit increases in customer engagement compared to traditional digital marketing campaigns.




Tuesday, March 18, 2025

AI Investments Expected to Shift to Inference While Growing Faster Than Forecast





The impact of reasoning AI models from DeepSeek and OpenAI is reportedly expected to shift the focus of artificial intelligence (AI) investments while also boosting AI spending overall.

While the debut of the DeepSeek models led observers to question the need for investment in AI infrastructure, it also led to a greater focus on reasoning models, which require greater spending on inference, Bloomberg reported Monday (March 17).

As a result, Bloomberg Intelligence now expects investments in AI by hyperscale companies like Amazon, Meta and Microsoft to increase faster than it previously forecast, with more of that money being spent on running AI systems after they have been trained, rather than on data centers and chips, according to the report.

These companies are expected to spend $371 billion on data centers and computing resources in 2025 — 44% more than they spent in 2024 — and $525 billion a year by 2032, the report said.

By 2032, nearly half of all AI spending will be directed toward inference, as reasoning models enable companies to make more money from software, per the report. At the same time, the share of investment directed toward training is expected to drop from 40% to 14%.

DeepSeek shook up the AI world in late January when it released AI models that performed on par with OpenAI’s and Google’s top models but at a fraction of the cost and with far fewer of Nvidia’s GPUs.

Shortly after the release of the DeepSeek AI model that rocked the AI world, Meta CEO Mark Zuckerberg said during an earnings call that the U.S. AI industry is shifting toward AI processing, or inference, as reasoning AI models rise in popularity.

OpenAI released in February what it called its “most cost-efficient” reasoning AI model, the o3-mini, saying it is a “small” but “powerful and fast” model that outperforms earlier models especially in science, coding and math, and comes in three reasoning levels: low, medium and high for tougher tasks.

The model is part of OpenAI’s o1 series, which can reason through tasks but takes longer to respond than non-reasoning models. Reasoning models can also tackle tougher tasks and solve harder problems.




Monday, March 17, 2025

The first operating system for quantum networks has been built





As a step towards a useful and ultra-secure quantum internet, researchers have created an operating system that coordinates connected quantum computers, no matter what hardware they use

Linking quantum computers to each other just became easier, as researchers have created the first operating system for quantum networks.

The operating system the team built is software that can control the devices within a quantum network regardless of the type of qubits, or quantum bits, that make them up. Controlling devices like this is made more difficult by the fact that networked quantum computers receive both quantum information from other quantum computers and traditional signals from the classical computers that help interface with them.

To show that their operating system, called QNodeOS, can handle both, the researchers tested it with two different kinds of quantum computers, and several different tasks. They used two quantum computers made from specially processed diamonds, and another made from electrically charged atoms. With these two types of quantum hardware, the researchers ran a test program for delegated quantum computing – similar to using your laptop to perform a calculation in the cloud. They also tested the ability of QNodeOS to handle multitasking by running two programs at once.
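QNodeOS itself is not a public Python package, so the toy sketch below is purely illustrative of the coordination problem such an operating system solves: interleaving more than one program on a node when quantum steps must wait for entanglement with a remote machine while classical steps can proceed.

```python
import random

# Purely illustrative: a toy node "OS" interleaving two programs whose quantum
# steps must wait for entanglement with a remote node. Not the QNodeOS API.
def program(name, steps):
    for step in steps:
        if step == "classical":
            yield f"{name}: run classical block"
        else:  # a quantum step blocks until an entangled pair is available
            while random.random() < 0.6:          # entanglement not ready yet
                yield f"{name}: waiting for entanglement"
            yield f"{name}: consume entangled pair, run quantum block"

progs = [
    program("delegated-compute", ["classical", "quantum", "classical"]),
    program("second-app",        ["quantum", "classical", "quantum"]),
]

# Round-robin scheduling: whenever one program waits, the other can progress.
while progs:
    for p in list(progs):
        try:
            print(next(p))
        except StopIteration:
            progs.remove(p)
```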

Joe Fitzsimons at the quantum computing start-up Horizon Quantum, based in Singapore and Ireland, says this is a significant advance in laying down the foundations for a quantum internet. He says that “once you start taking the idea of building general-purpose quantum networks seriously, there ends up being a lot of work to do”, and this new operating system leads to a long list of things to develop next, such as routing protocols.

Stephanie Wehner at Delft University of Technology in the Netherlands, a member of the team, says developing QNodeOS has been like drawing up a colouring page: they have outlined all the shapes and will now be hard at work colouring them in. For instance, the work raised the question of how to write scheduling programs for a quantum network. “This wasn’t even on my radar before, but now I am so excited about it,” she says.




Friday, March 14, 2025

Machine learning precisely predicts material characteristics for high-performance photovoltaics





In the lab, perovskite solar cells show high efficiency in converting solar energy into electricity. In combination with silicon solar cells, they could play a role in the next generation of photovoltaic systems. Now researchers at KIT have demonstrated that machine learning is a crucial tool for improving the data analysis needed for commercial fabrication of perovskite solar cells. They present their results in Energy & Environmental Science.

Photovoltaics is a key technology in efforts to decarbonize the energy supply. Solar cells using perovskite semiconductor layers already boast very high efficiency levels. They can be produced economically in thin and flexible designs.

"Perovskite photovoltaics is at the threshold of commercialization but still faces challenges in long-term stability and scaling to large surface areas," said Professor Ulrich Wilhelm Paetzold, a physicist who conducts research at the Institute of Microstructure Technology and the Light Technology Institute (LTI) at KIT. "Our research shows that machine learning is crucial to improving the monitoring of perovskite thin-film formation that's needed for industrial production."


With deep learning (a machine learning method that uses neural networks), the KIT researchers were able to make quick and precise predictions of solar cell material characteristics and efficiency levels at scales exceeding those in the lab.

A step toward industrial viability


"With measurement data recorded during production, we can use machine learning to identify process errors before the solar cells are finished. We don't need any other examination methods," said Felix Laufer, an LTI researcher and lead author of the paper. "This method's speed and effectiveness are a major improvement for data analysis, making it possible to solve problems that would otherwise be very difficult to deal with."


By analyzing a novel dataset documenting the formation of perovskite thin films, the researchers leveraged deep learning to identify correlations between process data and target variables such as power conversion efficiency.
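The KIT team's exact architecture and dataset are not reproduced here, but the general idea of mapping an in-situ process signal to a predicted power conversion efficiency can be sketched with synthetic data and a small PyTorch network.

```python
# Minimal sketch (synthetic data, not the KIT dataset): a small neural network
# maps an in-situ process time series to a predicted power conversion efficiency.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, seq_len = 256, 100

# Synthetic "in-situ" traces; efficiency loosely tied to their mid-process shape.
X = torch.randn(n_samples, seq_len)
y = (18.0 + 2.0 * X[:, 20:60].mean(dim=1) + 0.3 * torch.randn(n_samples)).unsqueeze(1)

model = nn.Sequential(
    nn.Linear(seq_len, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # predicted efficiency in percent
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.3f}")
```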


"Perovskite photovoltaics has the potential to revolutionize the photovoltaics market," said Paetzold, who heads the LTI's Next Generation Photovoltaics department. "We show how process fluctuations can be quantitatively analyzed with characterization methods enhanced by machine learning techniques to ensure high material quality and film layer homogeneity across large areas and batch sizes. This is a crucial step toward industrial viability."




Artificial intelligence in manufacturing innovations




An established conference focused on bringing university and industry research to manufacturing professionals is the latest to include artificial intelligence (AI) as a key component.

The University of Tennessee recently hosted hundreds of manufacturing professionals at the annual Manufacturing and Reliability Conference (MARCON 2025) in Knoxville. From keynote speeches and panels to exposition vendors, AI was the most commonly used new catchphrase.

MARCON dedicated one of its key panels to experts discussing practical uses for AI in plant operations. Industry leader Paul Casto of GE Digital said we are already “using AI to make better decisions based on data.” He and other panelists agreed it is critical to have a process in place to act on early AI-generated alerts about equipment problems. They also emphasized that security protocols are essential.

Vasileios Maroulas, PhD, director of the AI Tennessee Initiative at the University of Tennessee, announced during his keynote address that he is putting out a new Request for Proposals for AI research connecting UT and the business world. AI TechX intends to build a team of academics and industry leaders to show how artificial intelligence can support innovation and jobs in Tennessee. Maroulas tells audiences in the simplest terms that “AI transforms every single industry … at various levels,” so this is an important time for more research.

At the Knoxville Convention Center’s vast expo area filled with MARCON vendors, nearly every booth included a built-in conversation about AI’s role in competitive technologies related to preventive maintenance. Business-to-business offerings included everything from consulting to software to sensors for keeping plants running smoothly.

“It makes sound visible,” said Daus Studenberg, national products manager at Ludeca, describing the Crysound tool, which uses ultrasound and infrared imaging to detect air and gas leaks at plants as well as electrical arcing.

Cory Burns of RDI Technologies explained that his company’s camera-and-software combination, called Iris M, can detect tiny imperfections in plant machinery long before they become a problem. “We use all these reliability tools to get to the solution. Ours can get there in three seconds.”

Studenberg, Burns and their competitors who use vibration detection to spot equipment issues all say AI plays a part in how the software tools within their respective systems detect problems and communicate the resulting data to R&M professionals.
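The vendors’ products are proprietary, but the generic pattern they describe can be illustrated with a short, self-contained sketch: extract frequency-domain features from vibration signals, learn a healthy baseline, and flag departures from it. The signals, features and threshold below are synthetic assumptions.

```python
# Illustrative only: flag anomalous vibration signatures by training on
# frequency-domain features of healthy machine data (all data is synthetic).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
fs, n = 1000, 1000                    # 1 kHz sampling, 1-second windows
t = np.arange(n) / fs

def features(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    # Simple summary features: total energy, energy above 100 Hz, dominant frequency.
    return [spectrum.sum(), spectrum[freqs > 100].sum(), freqs[spectrum.argmax()]]

# Healthy baseline: a 60 Hz running tone plus measurement noise.
healthy = [np.sin(2 * np.pi * 60 * t) + 0.1 * rng.normal(size=n) for _ in range(200)]
X_train = np.array([features(s) for s in healthy])

detector = IsolationForest(random_state=0).fit(X_train)

# A new window with an extra 180 Hz component, as a developing fault might show.
faulty = np.sin(2 * np.pi * 60 * t) + 0.8 * np.sin(2 * np.pi * 180 * t) \
         + 0.1 * rng.normal(size=n)
print("anomaly" if detector.predict([features(faulty)])[0] == -1 else "normal")
```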

Loctite adhesives and reliability specialist Kimberly Smith pointed out how AI adoption relates to the shortage of people to fill skilled jobs. “The workforce issues have accelerated the adoption of AI-enabled or smart devices in manufacturing plants.”



Wednesday, March 5, 2025

AI in vehicle development: Large Language Models




The Rosetta stone discovered in 1799 was a milestone in the deciphering of Egyptian writings. It contains a priestly decree from the ancient Greek-Macedonian-Ptolemaic dynasty, dated to 196 BC, in three different languages. By allowing the texts and characters to be compared, it offered a means of deciphering Egyptian hieroglyphics, which had resisted decipherment until the 19th century. Since that time, the term ‘Rosetta Stone’ has been used to refer to an essential clue in decryption tasks. Today, AI-based language models, known as large language models (LLMs), are regarded as the Rosetta Stone of the future. “A large language model is based on neural networks and is able to decode the meaning of natural language in context and machine-process it. LLMs can understand, process, and translate language, but also generate new texts,” explains Dr. Joachim Schaper, Senior Manager AI and Big Data at Porsche Engineering.

Porsche Engineering uses LLMs to further increase efficiency in the development process. The company uses commercially available LLM tools such as ChatGPT from OpenAI or LLaMa from Meta. “These models are pre-trained by very large amounts of data from the internet and handle tasks such as writing texts on standard topics very well. For use in development, however, we need an LLM that also takes into account our engineering expertise,” says Schaper.

The technical knowledge of Porsche Engineering is taught to the AI using its own data sets from completed development projects. One area of application for LLMs is the revision of customer specifications. Depending on the project, the client, and the development team, their content is written in very different forms. If an existing system is to be technically updated as part of a further development, Porsche Engineering often receives the requirements from existing customer specifications and the scope of the changes from its customers.

Before the actual development task starts, the developers must work through the customer specifications completely and translate the information they contain into concrete technical specifications, in order to avoid development errors caused by ambiguous requirements. Porsche Engineering has recently begun using predefined block templates in the revision of specifications, a basic principle of requirements engineering for the standardized, high-quality creation of requirements. With the aid of this methodology, information is presented in a way that is clear, consistent, verifiable, accurate, and understandable. "Today, our engineers have to do the revision of the specifications as a manual activity. This ties up resources in development and is a monotonous activity for the employees," says Volker Reber, Senior Manager High-Voltage System Development at Porsche Engineering.
Understanding the context

This task cannot be automated with conventional algorithms. As the formulations in the specifications are frequently not clear, the meaning has to be inferred from the context. Conventional software programs cannot do this intellectual step, but AI can. In the future, LLMs will therefore support the revision of specifications. “As a demonstration project, we revised the requirements catalog for a component of a vehicle,” reports Reber. To train the LLM, a dataset with a few hundred items of information was enough to prepare it for the new task. The model learned how to deal with different semantic forms in the original texts and also learned the text patterns for the output.
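Porsche Engineering's internal models and block templates are not public; the following sketch only illustrates the general pattern with the publicly documented OpenAI Python client, an invented requirement and a simplified template.

```python
# Illustrative sketch (not Porsche Engineering's internal tooling): use an LLM
# to rewrite a free-form customer requirement into a standardized block template.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

raw_requirement = (
    "Charging should stop if things get too hot, and the driver needs to know."
)

template_instructions = """Rewrite the requirement in this block template:
WHEN <condition>, THE <system> SHALL <action>, SO THAT <rationale>.
Keep exactly one requirement per line and do not invent numeric values."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": template_instructions},
        {"role": "user", "content": raw_requirement},
    ],
)
print(response.choices[0].message.content)
# Possible output (illustrative):
# WHEN the battery temperature exceeds its permitted limit, THE charging system
# SHALL stop charging and notify the driver, SO THAT the battery is protected.
```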

“After this step, the specifications, consisting of a few thousand individual items of information, could be converted to the standard format much faster than through manual processing,” says Reber. As an extension of the project, the trained LLM is used for additional tasks in the field of specifications revision, such as checking for completeness and consistency with regard to the requirements of different systems of the vehicle described therein. The engineers ‘only’ have to check the result produced by the LLM, which means that the workload will decrease over time. “Since the AI is further trained by feedback of the results, the quality of the LLM continuously increases with each project. In the future, it will not only deliver faster, but also much better implementations than a human being could ever do,” Schaper thinks. The LLM has already reduced the effort by around 50 percent during the first test. There are already ideas for further optimizations, which will enable this figure to be improved significantly. Nevertheless, human expertise will still be needed for these tasks in the future.

For Porsche Engineering, the combination of specifically designed and trained AI systems and human expertise has a strategic importance. Many engineering tasks consist of sub-areas that require varying degrees of expertise, experience, and evaluation. Some companies already rely on regions with the best personnel cost structure for certain areas of activity. Porsche Engineering relies on the use of tools such as AI for comparable activities and continues to pursue a strategy of high-level competence among its employees. The experts can then concentrate their valuable working time on the high-competence part of the task. LLMs also offer potential for increasing efficiency in other areas of vehicle development. One example is data management during test drives with new vehicles or systems. If the test drivers detect a malfunction during the tests, they log it and feed it into a central database system.

“Today, we have the challenge that unexpected system reactions are often not recognized as a previously recorded phenomenon and are entered into the system several times,” explains Dr. Fabian Hinder, Lead Engineer at Porsche Engineering. This makes systematic troubleshooting more difficult, as the analysis of database information is associated with considerable manual effort.
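One way such duplicates could be surfaced automatically is semantic similarity between report texts. The sketch below assumes the open-source sentence-transformers library and two invented log entries; it is not Porsche Engineering's actual tooling.

```python
# Illustrative sketch: flag a likely duplicate fault report by semantic
# similarity, so the same phenomenon is not logged twice under different wording.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

existing_reports = [
    "Infotainment display briefly goes black when reversing with the camera on",
    "High-voltage charging aborts at 80% state of charge in cold weather",
]
new_report = "Rear-view camera screen flickers to black while backing up"

embeddings = model.encode(existing_reports, convert_to_tensor=True)
query = model.encode(new_report, convert_to_tensor=True)

scores = util.cos_sim(query, embeddings)[0]
best = int(scores.argmax())
if float(scores[best]) > 0.6:   # illustrative threshold
    print(f"Possible duplicate of: {existing_reports[best]}")
else:
    print("No similar report found; create a new entry.")
```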




Monday, March 3, 2025

CLOUD DATA SECURITY SOLUTIONS PROTECTING YOUR BUSINESS FROM CYBER THREATS





The global cost of cyber threats and attacks is estimated to reach $10.5 trillion in 2025. Considering that the estimate back in 2015 was $3 trillion, you can imagine how great a risk cyber threats pose to businesses across the globe, and that is not all. In 2023, cybercrime was estimated to have grown by around 23%, a ransomware attack was estimated to occur every 11 seconds, and the average cost of a data breach reached $4.45 million, a financial blow that companies around the globe had to absorb. To add salt to the wound, 91% of companies reported cybersecurity risks tied to cloud technologies, and phishing scams continued to succeed, with an estimated 83% of organizations experiencing more than one such attempt over the year. This harsh reality points to a growing need for solutions that secure cloud data with encryption, disaster recovery, monitoring and similar technologies.
 
Why Businesses Need Cloud Data Security Solutions

In today's digital age, businesses find it convenient to use the cloud for storage and services. The cloud provides a wide array of benefits, from high accessibility to easy scalability, but with that ease comes immense responsibility: the cloud is also highly exposed to cybercrime. Safeguarding sensitive business information in the cloud requires cloud data security solutions that reduce the risks and protect sensitive data against loss and exposure. Some 96% of businesses around the world use public cloud services to stay competitive, and 84% use private clouds. The statistics below shed light on the current state of cloud data security and underline why organizations should invest in comprehensive protection strategies.

1. Increasing Frequency of Cloud Attacks: 80% of organizations report a rise in the number of cloud-related attacks. These include:

Data Breaches: the cause of 33% of cloud attacks.

Environment Intrusion Attacks: about 27% of attacks.

Cryptojacking: 23% of incidents.

Failed Audits: 15% of attacks.

2. Prevalence of Cloud-Stored Data Breaches

As we know, 82% of mobile users' data stored on the cloud has been compromised, which is exactly why tighter data security in the cloud is necessary. 

3. Cloud Security Market Growth

The global cloud security market was valued at $20.54 billion in 2022 and is projected to reach $148.3 billion by 2032, which reflects a compound annual growth rate (CAGR) of 22.5%.

4. Security Incidents in Cloud Environments

Common security incidents during cloud runtime include unauthorized access, which 33% of organizations reported, and misconfigurations, which 32% of organizations experienced.

5. Cloud Security Prioritization

Reports indicate that securing the cloud is a major concern for all IT leaders, with 72% ranking it as the top priority for their organizations. This growing recognition of the need to protect cloud environments against evolving threats and ensure the safety of sensitive data and systems is a significant development in cybersecurity.

Strategic Ways to Implement Cloud Data Security Solutions and Their Impact

Data Encryption will ensure your data remains secure by converting it into unreadable code. Cloud data security solutions provide encryption both in transit and at rest, ensuring that sensitive information remains protected even if accessed by unauthorized parties.
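As a minimal illustration of encryption at rest, the sketch below uses the Python cryptography package's Fernet primitive; key management is reduced to a single in-memory key purely for demonstration.

```python
# Minimal illustration of encryption at rest with the `cryptography` package.
# Real deployments keep keys in a KMS/HSM, never alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: fetched from a key manager
cipher = Fernet(key)

record = b'{"customer_id": 4711, "card_last4": "1234"}'
token = cipher.encrypt(record)       # what actually gets written to cloud storage

print(token[:40], b"...")            # ciphertext is unreadable without the key
print(cipher.decrypt(token))         # authorized services decrypt on read
```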

Access Control and Authentication: Implementing strong access controls ensures that only authorized users can access sensitive data. Multi-factor authentication (MFA) and role-based access control (RBAC) are crucial features that limit access based on user roles and verification processes.
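A role-based access check combined with an MFA requirement can be sketched in a few lines; the roles, permissions and actions below are invented for illustration.

```python
# Minimal role-based access control (RBAC) check; roles and permissions invented.
ROLE_PERMISSIONS = {
    "analyst":  {"read_reports"},
    "engineer": {"read_reports", "modify_configs"},
    "admin":    {"read_reports", "modify_configs", "manage_users"},
}

def is_allowed(role: str, action: str, mfa_verified: bool) -> bool:
    # Sensitive actions additionally require a completed MFA challenge.
    if action in {"modify_configs", "manage_users"} and not mfa_verified:
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "modify_configs", mfa_verified=True))   # False
print(is_allowed("engineer", "modify_configs", mfa_verified=True))  # True
print(is_allowed("admin", "manage_users", mfa_verified=False))      # False
```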

Continuous Monitoring and Threat Detection: Advanced cloud security solutions offer real-time monitoring and detection of potential threats. By leveraging artificial intelligence and machine learning, these tools can identify and respond to unusual activities before they escalate into full-blown attacks.

Compliance Management: Many industries have stringent regulations regarding data security and privacy. Cloud data security solutions help businesses comply with GDPR, HIPAA, and ISO 27001 standards by implementing necessary safeguards and providing audit trails.

Backup and Disaster Recovery: A comprehensive security strategy includes regular backups and a robust disaster recovery plan. This ensures that critical business data can be restored quickly in the event of a breach or system failure.

Benefits of Using Cloud Data Security Solutions

Advanced Threat Protection

Cloud security measures employ advanced technologies such as encryption, artificial intelligence-powered threat detection, and real-time surveillance to safeguard sensitive information from cyberattacks, unauthorized access, and data breaches.

Regulatory Compliance

These solutions assist companies in complying with strict regulations like GDPR, HIPAA, and ISO 27001, enabling legal adherence and preventing possible fines or penalties.

Cost savings

By migrating security functions to cloud-based operations centres, companies eliminate the need for costly in-house technologies and avoid expenses related to data breaches, litigation, and lost revenue. Security tools can also be tailored to an organization's needs through pay-as-you-go pricing.

Business Continuity and Disaster Recovery

Thanks to frequent automated backups stored in multiple locations, cloud solutions can reduce downtime caused by a system failure, cyberattack, or natural disaster to a minimum, keeping the business running.

Scalability and Flexibility

With only a fraction of the manual labour required, cloud solutions scale storage, security, and other features along with the growth of the business, and can be reconfigured quickly as threats and operational needs evolve.

Enhanced Data Availability

Cloud systems give authorized users access to their important data from anywhere in the world, enhancing collaboration and productivity while proper security measures remain in place.

Automated Updates and Maintenance

Providers regularly update cloud security systems to address emerging threats and vulnerabilities, reducing the burden of manual maintenance on your IT team.

Key Strategies for Strengthening Cloud Data Security

Choose Reputable Cloud Providers: Partner with cloud service providers that prioritize security and have a proven track record of safeguarding data.
Regularly Update Security Protocols: Stay ahead of emerging threats by regularly updating your security measures and software.
Educate Employees: Human error is a leading cause of data breaches. Train your staff on best practices for data security, including recognizing phishing attempts and securing login credentials.
Conduct Regular Audits: Periodic audits of your cloud security measures can help identify vulnerabilities and areas for improvement.


 
Conclusion

Approximately 60% of global business data is now kept in cloud storage, and projections show that the global cloud computing sector is expected to expand at a compound annual growth rate (CAGR) of 16.3% until 2026. By 2027, the top three industries investing in public cloud services will be banking, telecommunications, and software and information services, with a combined expenditure of $326 billion. It is predicted that by 2026, spending on public cloud services will make up more than 45% of overall enterprise IT budgets, compared with less than 17% in 2021.


