Tuesday, April 29, 2025

AIT – Use of artificial intelligence, big data and autonomous synthesis to accelerate the development of tomorrow’s battery technology

Batteries play a central role in reducing CO₂ emissions in transportation, energy, and industry. However, the development of new battery materials is still a lengthy process. It is usually based on classic trial-and-error methods that often take over a decade. To accelerate these processes, new approaches are needed that intelligently combine digital technologies and automation.

This is precisely where the European research project FULL-MAP comes in. The aim is to develop a fully integrated, AI-supported platform for accelerating material and interface development that simulates, digitalises and automates the entire battery development process – from material development to cell testing. The use of artificial intelligence, machine learning, automated synthesis and high-throughput characterization is expected to take battery development to a new technological level.


Innovation for a new generation of batteries

FULL-MAP is taking a holistic approach to drastically reducing the time from material discovery to the deployment of next-generation batteries. The key project objectives are:
Building an interoperable data framework for the structured collection, use and reuse of information on battery materials and interfaces.
Development of adaptable design and simulation tools that use artificial intelligence and machine learning methods to derive suitable material structures and configurations from target specifications such as specific material properties, thereby accelerating complex simulation processes across multiple physical scales.
Further development of analysis methods and automation of high-throughput characterisation modules and technologies for fast, reliable and scalable analysis of battery materials and interfaces.
Development of AI-controlled autonomous synthesis robots to efficiently synthesise, test and further develop novel materials through data-driven iterations (see the sketch after this list).
Strengthening the European research and innovation system and market positioning of the EU in the field of batteries through international cooperation and knowledge dissemination.
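
The last two objectives describe a closed, data-driven loop: a model proposes the next candidate material, a robot synthesises and characterises it, and the measurement feeds back into the model. Below is a minimal sketch of such a loop using Bayesian optimisation with a Gaussian-process surrogate from scikit-learn; the descriptor space and the measure_capacity function are hypothetical placeholders, not part of the FULL-MAP platform.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical descriptor space for candidate electrode materials
# (e.g. dopant fraction, particle size, binder ratio), scaled to [0, 1].
candidates = rng.random((500, 3))

def measure_capacity(x):
    """Placeholder for an automated synthesis + high-throughput test.
    In FULL-MAP this would be a robotic experiment, not a formula."""
    return float(-np.sum((x - 0.6) ** 2) + 0.05 * rng.normal())

# Seed the loop with a few initial experiments.
tried_idx = list(rng.choice(len(candidates), size=5, replace=False))
X = candidates[tried_idx]
y = np.array([measure_capacity(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(10):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    acquisition = mean + 1.96 * std    # upper confidence bound
    acquisition[tried_idx] = -np.inf   # do not repeat experiments
    nxt = int(np.argmax(acquisition))
    result = measure_capacity(candidates[nxt])
    tried_idx.append(nxt)
    X = np.vstack([X, candidates[nxt]])
    y = np.append(y, result)
    print(f"iteration {step}: best capacity so far = {y.max():.3f}")
```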




Artificial superintelligence (ASI): Sci-fi nonsense or genuine threat to humanity?





Rapid progress in artificial intelligence (AI) is prompting people to question what the fundamental limits of the technology are. Increasingly, a topic once consigned to science fiction — the notion of a superintelligent AI — is now being considered seriously by scientists and experts alike.

The idea that machines might one day match or even surpass human intelligence has a long history. But the pace of progress in AI over recent decades has given renewed urgency to the topic, particularly since the release of powerful large language models (LLMs) by companies like OpenAI, Google and Anthropic, among others.

Experts have wildly differing views on how feasible this idea of "artificial superintelligence" (ASI) is and when it might appear, but some suggest that such hyper-capable machines are just around the corner. What’s certain is that if, and when, ASI does emerge, it will have enormous implications for humanity’s future.

"I believe we would enter a new era of automated scientific discoveries, vastly accelerated economic growth, longevity, and novel entertainment experiences," Tim Rocktäschel, professor of AI at University College London and a principal scientist at Google DeepMind told Live Science, providing a personal opinion rather than Google DeepMind's official position. However, he also cautioned: "As with any significant technology in history, there is potential risk."
What is artificial superintelligence (ASI)?

Traditionally, AI research has focused on replicating specific capabilities that intelligent beings exhibit. These include things like the ability to visually analyze a scene, parse language or navigate an environment. In some of these narrow domains AI has already achieved superhuman performance, Rocktäschel said, most notably in games like Go and chess.

The stretch goal for the field, however, has always been to replicate the more general form of intelligence seen in animals and humans that combines many such capabilities. This concept has gone by several names over the years, including “strong AI” or “universal AI”, but today it is most commonly called artificial general intelligence (AGI).

"For a long time, AGI has been a far away north star for AI research," Rocktäschel said. "However, with the advent of foundation models [another term for LLMs] we now have AI that can pass a broad range of university entrance exams and participate in international math and coding competitions."




Saturday, April 26, 2025

Artificial intelligence models fall short of predicting social interactions





A study from Johns Hopkins University reveals that humans excel over AI in understanding social interactions in motion, an essential skill for technologies such as self-driving cars and assistive robots.

Current AI struggles to recognize human intentions, such as whether a pedestrian is about to cross the street or if two people are engaged in conversation. Researchers suggest that this issue stems from how AI is built, as it cannot fully grasp social dynamics.

To compare AI models with human perception, researchers had people watch short video clips and rate how well they understood the social interactions depicted. The videos showed people engaging with each other, doing activities side by side, or acting independently.

Next, they tested over 350 AI models—spanning language, video, and image processing—asking them to predict how humans would judge the videos and how their brains might respond. For large language models, they analyzed human-written captions to see how well AI understood social dynamics.
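
In practical terms, this comparison boils down to correlating each model's predicted ratings with the averaged human ratings, and checking how much humans agree with one another. A minimal sketch of that analysis is shown below, assuming ratings stored as NumPy arrays and Spearman correlation as the agreement measure; the data and variable names are illustrative, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative data: 20 video clips rated 1-7 by 30 participants,
# and one model's predicted ratings for the same clips.
rng = np.random.default_rng(1)
human_ratings = rng.integers(1, 8, size=(30, 20)).astype(float)
model_scores = rng.random(20) * 6 + 1

# Human-human agreement: correlate each rater with the mean of the others.
agreements = []
for i in range(human_ratings.shape[0]):
    others = np.delete(human_ratings, i, axis=0).mean(axis=0)
    agreements.append(spearmanr(human_ratings[i], others).correlation)
human_ceiling = np.mean(agreements)

# Model-human agreement: correlate model predictions with the mean rating.
model_human = spearmanr(model_scores, human_ratings.mean(axis=0)).correlation

print(f"human-human agreement (avg): {human_ceiling:.2f}")
print(f"model-human agreement:       {model_human:.2f}")
```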

Researchers found that human participants generally agreed on how they interpreted social interactions in videos, but AI models struggled, regardless of their training data or size.

This highlights a major gap in AI’s understanding of unfolding social dynamics. Scientists believe this limitation stems from AI’s design, as current models are built like the part of the human brain that processes static images, unlike the region responsible for interpreting dynamic social scenes. The study suggests that AI still cannot fully mimic how humans naturally perceive and respond to social interactions.




Thursday, April 24, 2025

Future-Proofing Your Machine Learning Career in a Rapidly Changing Industry





Introduction

Machine learning is one of the most rapidly evolving technological fields. Today's biggest trends and best-performing models can become outdated tomorrow. Therefore, every professional or aspiring professional in this field should commit to continuous learning and adaptability to stay current on the latest advances and trends. Far from being a chore, this can be seen as an opportunity to be part of one of the most dynamic and enthralling fields shaping many aspects of our future lives.

In this opinion article, I distill some key insights, tips, and best practices to help you future-proof your machine learning career. My experience in this field has been multifaceted: primarily focused on education, but also encompassing research, industry, and consultancy. The opinions below are drawn from my own journey and insightful conversations with colleagues across the machine learning landscape.

Below I share the three key tips I consider essential for any machine learning professional to future-proof their career no matter their prior background.


1. Be Willing to Learn New Things Constantly

This may sound quite obvious as we are talking about a constantly evolving subdomain of AI. Almost nobody had heard of large language models (LLMs) a few years ago, yet today they are the biggest AI trend. The bottom line: part of your daily work as a machine learning professional must be devoted to learning and being curious about emerging technologies, frameworks, research papers, and industry applications.

If you are a researcher, you may prioritize depth over breadth, that is, delving deep into a very specific topic being investigated by the machine learning scientific community — e.g. novel neural activation functions to mitigate the exploding gradients problem (just a random example!). Meanwhile, if you are an educator or content creator, you may instead focus on breadth over depth, that is, gaining a comprehensive and not-too-deep understanding of every area and trend across the machine learning landscape.

Some strategic hacks to make this constant learning process more appealing are: listening to podcasts or watching videos during commutes or idle periods, setting aside "learning sprints" weekly if you are an agile methodology advocate, or engaging in active learning by building small projects to apply new concepts. Are you living in a larger city? Try to find meetups, hackathons, and similar initiatives organized by machine learning local communities. That's a good way to keep learning, network with others, and sometimes enjoy free pizza.


2. Know Yourself

Exercise introspection and self-awareness to gain a clear understanding of the direction you want to follow in your machine learning career. Machine learning is an increasingly large and interdisciplinary field with many possible pathways, so you need to chart your own course. A passionate programmer with an interest in software systems integration may feel comfortable pursuing a machine learning engineering career, whereas someone driven by data analysis, statistical modeling, and deriving actionable insights would fit better in the role of a data scientist.

Not sure where to start in this introspection exercise? Try asking yourself these 4 questions:
What excites me most about machine learning? Is it building and optimizing models, uncovering insights from data, or deploying systems at scale? In my case, while I enjoy training and optimizing models and analyzing data, what I enjoy the most is (take a wild guess...!): teaching and educating others, especially those new to the field. Meanwhile, let's admit it: deploying systems at scale is not my cup of tea. And that's valid: the key is knowing clearly what you enjoy and what you do not enjoy doing. In machine learning, due to the diversity of tasks and roles involved, it is possible to focus on what excites you the most.
What are my strengths and weaknesses? Do you excel in coding and systems thinking, or are you stronger at statistical analysis and data experiments? In industry, I felt I could add more value by analyzing business problems and translating them into machine learning-based solutions that address the identified problems effectively, sometimes even proposing something innovative. Sure, I could contribute to implementation code if a helping hand was needed, but I felt my greatest potential for differential contributions lay in earlier stages of the machine learning development lifecycle.
What type of work environment suits me? Do you prefer an office, remote, or hybrid setting? Are you more productive in research-focused roles, industry-driven teams, or independent freelancing? The answers to these questions may not be as decisive for the direction of your machine learning career, but they may still influence the kind of roles you want to pursue. In my case, as of today, it is very clear to me: fully remote, freelance work is the way to go, although the opportunity to occasionally take part in physical events as a speaker remains very attractive, given my passion for public speaking and for disseminating machine learning knowledge.
Which machine learning applications resonate with me? Do you feel drawn to natural language processing, computer vision, recommendation systems, or something else? Are you concerned about sustainability, health, or other causes, and would you like to find a machine learning role in a company in a related sector?


3. Let Others Know You

Once you know yourself clearly and have defined the right direction for your machine learning career, it's time to build your profile and make it visible to others interested in your experiences and skills.

Maintain an organized GitHub repository showcasing your projects, code quality, and contributions. My work repository, for instance, is rather focused on educational projects like courses and training for companies, hence one of the resources I add to it is a compilation of public datasets for teaching purposes.

You should also optimize your LinkedIn profile to highlight relevant achievements, certifications, and roles, and actively engage with the machine learning community by sharing insights or articles. I try my best to do this by advertising my articles written on this website!

Consider also creating a personal portfolio website to present your work in a polished and accessible way, making it easier for recruiters or collaborators to understand your expertise at a glance. It takes time and effort, I know: I am still keeping my newest site "under construction 🚧" at the time of writing. But once you publish it and it looks professional, chances are it will help you gain visibility and interest in you as a machine learning professional.


Wrapping Up

This article has provided key tips and strategies, from my perspective, for defining and future-proofing a machine learning career in the direction that best resonates with you. Machine learning is a wide, ever-growing and evolving field, where the possibilities for consolidating yourself as a machine learning professional are very diverse, far beyond just becoming a machine learning engineer. Constant learning, getting to know yourself, and letting others know you are my suggested triad in this endeavor.




Reskilling IT for AI and machine learning environments [Q&A]




As AI and machine learning technologies rapidly evolve, IT professionals must continuously adapt their skills to stay competitive in the workforce. This requires not only technical expertise but also a commitment to lifelong learning, including earning relevant certifications and developing crucial soft skills like communication and adaptability.

Companies can support this growth by fostering a culture of continuous learning, offering reskilling and upskilling opportunities, and providing tailored training paths for their employees. By prioritizing ongoing development, businesses can ensure their workforce remains at the forefront of emerging technologies, preparing them for the challenges of the AI-driven future.

Ornella Casagrande Rizzi, learning and development coordinator at Indicium, an AI and data consultancy, answered a few questions on the topic of reskilling IT for AI and machine learning environments.

BN: How can IT professionals prepare for the rapid evolution of roles caused by the proliferation of AI and machine learning technologies in 2025?

OCR: The IT market is an ever-changing environment that won't stop evolving anytime soon, so it demands professionals who are also in constant motion with their technical skills, being agile and adaptable. The key to success in this scenario lies in a lifelong learning mentality, keeping up to date with new tools and technologies that emerge daily, and also not forgetting to earn certifications that validate these skills, which will make all the difference in meeting the standards that the market requires.

However, mastering the technical field is only half the equation. Soft skills such as communication, creativity, adaptability, and analytical thinking will be major differentiators that help professionals respond quickly to change, and also lead and innovate through it. Professionals who are constantly growing in both hard and soft skills will be ahead of the curve, shaping the future of technology in the market.

In the Lighthouse Program, a training initiative by Indicium Academy, students learn skills across numerous data career paths while also being on a journey to specifically enhance their human skills. Through a rigorous learning path and hands-on challenges, participants develop the versatility needed to excel in a competitive and ever-changing landscape.

BN: What steps can organizations take to ensure their workforce has the skills needed to stay relevant as legacy tech skills become obsolete?

OCR: When analyzing the strategies that successful companies use to keep their workforce relevant, it is clear that they all revolve around the same key factor: lifelong learning. A continuous learning mindset, along with valuing and encouraging employees to pursue continuous professional development, is what companies need to maintain a relevant workforce.

In the upskilling strategy, professionals learn new skills within their career field -- a crucial factor as new technologies emerge in the market, requiring companies to adapt or risk being overtaken by competitors. On the other hand, with reskilling, employees are encouraged to take new steps in their careers while having a clear view of opportunities for internal mobility within the same company. Supporting and promoting these initiatives fosters a culture of continuous learning and knowledge-sharing within the organization. When employees transition to different roles or departments, they bring fresh perspectives to their new teams, naturally creating a flow of knowledge across the company.

At Indicium we have a strong Mentoring Program to welcome new employees. As the new hire settles in, the TechTracks Program aims to keep the employee up to date in the skills of their chosen career. Eventually, professionals also have the opportunity to teach their specific data interests in the program. This dynamic approach not only fosters a culture of continuous learning but also builds lasting connections across the company.

BN: How can businesses prioritize upskilling in areas like data analytics, cybersecurity and AI/ML to meet the demands of a competitive and transforming industry?

OCR: To prioritize upskilling, IT companies need strategies to build a culture of continuous learning, regularly evaluating their workforce through a Learning Needs Assessment to find where it needs improvement and to identify the key skills necessary to stay ahead of industry trends. However, a universal approach won't work; each employee should have a personalized learning path that targets their specific needs, preferences, and learning styles, fostering engagement and reducing dropout rates.

For companies that want to focus on workforce development in 2025, collaborating with specialized educational institutions is also a great strategy for continuous training, as it allows the creation of customized internal training programs that can take employees' skills to the next level. Considering the company's preferences as well as the needs of the projects being worked on ensures that the training aligns with both individual goals and organizational objectives.

BN: What role do certifications and formal training programs play in helping IT professionals adapt to emerging technologies?

OCR: Certifications play a crucial role in helping IT professionals succeed in their careers. They can measure how seasoned the professionals are, for example, and how ready they are to move into a senior position or even transfer their knowledge from one project to another, according to the company's needs.

Earning renowned certifications, like the ones from Azure and dbt, as well as participating in formal training programs such as dbt from A to Z by Indicium Academy, is a field-tested way of validating the knowledge and skills of IT professionals, demonstrating their ability to evolve and learn new technologies while also giving them the opportunity to learn through hands-on experience.

Certifications can also help professionals develop skills beyond what they are actually being tested on, enhancing their problem-solving capabilities and building confidence from knowing they have earned formal, recognized qualifications, which makes them better prepared to solve real problems on a day-to-day basis.

BN: How can companies strike a balance between reskilling current employees and recruiting new talent with expertise in emerging technologies like AI and ML?

OCR: Reskill where possible, hire where necessary. Upskilling and reskilling are strategies that companies need to prioritize for numerous reasons: besides avoiding the risks associated with hiring externally, they are excellent ways to foster a culture of learning by effectively teaching employees how to learn and shaping them into lifelong learners. This ensures the staff is better prepared for the dynamic changes brought by emerging technologies while also remaining loyal to their company. Align opportunities for learning with employee career paths, and watch employee engagement and happiness increase significantly.

But when should companies hire externally? To answer this question, the company needs an organized internal process to determine what is needed, whether those capabilities already exist internally, or if it is better and more profitable to bring in someone new.

Also, there's the option of recruiting new talent without specialized knowledge and training them. It will take some time, but it is cheaper, and if the company does it consistently it won't have to hire highly specialized new people or spend as much on retaining current employees, which makes it a really effective long-term strategy.

BN: What are the key challenges IT leaders face in fostering a culture of continuous learning and innovation to address future tech trends?

OCR: Engaging employees while creating optimized and truly effective processes is a significant challenge IT leaders face in fostering a culture of continuous learning and innovation today. Time constraints are a common barrier to effective programs, but these can be overcome with a well-crafted strategy that engages all levels of seniority across departments with a mutual objective.

Essentially, culture comes from the top down, so leaders must be given the confidence to pass the culture on to new employees, and they must also make room in the company's calendar for official training sessions, providing the learning resources and pathways that deliver real results for the company and for professional careers.




Wednesday, April 23, 2025

Machine Learning Engineers Bespoke Cas9 Enzymes for Gene Editing





In a new study published in Nature titled, “Custom CRISPR-Cas9 PAM variants via scalable engineering and machine learning,” researchers from Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) have developed a machine learning model that permits the prediction of bespoke Cas9 proteins that are more uniquely suited to specific targets and can be tailored with designer properties for research or therapeutic use. The authors stated that this scalable methodology can engineer and characterize the biochemical properties of hundreds or thousands of novel Cas9 proteins.

“We’re excited to share these enzymes with the genome editing community and to get feedback on their performance as nucleases, base editors, and as other genome editing modalities,” said Benjamin Kleinstiver, PhD, associate professor of pathology at HMS, in an interview with GEN.

To safely and precisely correct mutations that cause genetic diseases, genome editing technology must be programmed to target patient-specific sequences while limiting off-target effects. Cas nucleases recognize genomic targets by reading protospacer adjacent motifs (PAMs), which initiate guide RNA pairing with the target site. In the case of commonly used Streptococcus pyogenes Cas9 (SpCas9), pairing requires the standard PAM sequence, 3’NGG, which inevitably restricts the use of the enzyme to PAM-encoding genomic sequences.

To expand access to the genome, a common engineering strategy is the relaxation of the PAM to allow editing to new sites while retaining activity against NGG, thereby creating generalist enzymes for broad applications. However, enabling efficient on-target editing while minimizing off-target effects for relaxed PAM enzymes remains a challenge.

In the new study, Kleinstiver’s team generated experimental PAM profiles for hundreds of engineered SpCas9 enzymes to train a neural network that related amino-acid sequence to PAM specificity. The resulting PAM machine learning algorithm (PAMmla) predicted the PAMs of some 64 million SpCas9 enzymes. It then identified efficacious and specific enzymes that outperformed existing evolution-based and engineered candidates as nucleases and base editors in human cells while reducing off-target effects.
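
As described, the core idea is a supervised model that maps a Cas9 variant's mutated amino-acid residues to a PAM activity profile. The sketch below shows that kind of mapping with a small feed-forward network in PyTorch; the residue encoding, layer sizes, 64-way output (an illustrative 4^3 choice of variable PAM bases), and the example variant are all assumptions, not PAMmla's actual architecture or training data.

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
aa_index = {a: i for i, a in enumerate(AA)}

def one_hot(seq: str) -> torch.Tensor:
    """One-hot encode the variable PAM-interacting residues of a Cas9 variant."""
    x = torch.zeros(len(seq), len(AA))
    for pos, aa in enumerate(seq):
        x[pos, aa_index[aa]] = 1.0
    return x.flatten()

class PamSpecificityNet(nn.Module):
    """Maps mutated residue identities to an illustrative 64-way PAM
    activity profile (one score per possible variable-base triplet)."""
    def __init__(self, n_positions: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_positions * len(AA), 128),
            nn.ReLU(),
            nn.Linear(128, 64),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: score one hypothetical variant with 6 mutated residues.
model = PamSpecificityNet()
variant = one_hot("RQKSER")          # illustrative residue identities only
with torch.no_grad():
    profile = torch.softmax(model(variant), dim=-1)
print(profile.shape)                  # torch.Size([64])
```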

Among the examples of user-directed Cas9 enzyme design, the MGH group performed selective targeting of the P23H mutation of rhodopsin in human cells and mice. This mutation is a common cause of autosomal dominant retinitis pigmentosa (adRP), a genetic eye disease leading to vision loss.

“We envision that the general framework of scalable engineering, deep characterization, and utilizing machine learning to predict a larger universe of proteins would be extensible to many exciting areas, including other properties of Cas9 enzymes, like target site specificity and on-target activity, to non-CRISPR enzymes entirely,” said Rachel Silverstein, first author of the study and a graduate student at HMS, in an interview with GEN.

Additional applications include extending this engineering workflow to other protein domains in next-generation editors, such as deaminase domains for base editors, reverse transcriptase domains for prime editors, and DNA polymerases for click editors.

Machine learning offers key advantages over traditional experimental engineering strategies, which often struggle with predicting the functional impact of multiple simultaneous mutations in addition to facing laborious and time-consuming experimental selection strategies. The authors stated that computational predictions can screen larger numbers of enzymes bearing more diverse combinations of amino-acid substitutions compared to experimental methods alone, thereby increasing the probability of identifying optimal enzymes across a deeper mutational space.

According to Silverstein, a key to this method is to establish a facile and scalable biochemical assay that can yield rich data about thousands of enzymes in parallel, providing the requisite data to train machine learning models.

“Longer-term, we envision that the use of machine learning can be widely applied to potentiate the activities of genome-editing technologies that will be beneficial for creating a diverse and complete toolbox of technologies,” Silverstein told GEN.




Tuesday, April 22, 2025

AI-powered risk model evaluates long-term risk of coronary artery disease






Researchers have used machine learning to develop an advanced risk model for coronary artery disease (CAD), presenting their findings in Nature Medicine.[1]

The group considered approximately 2,000 factors that may or may not influence a person’s long-term heart health; the list included demographic details, lifestyle choices, medications, genetics and much more. They then explored years of data from the U.K. Biobank database, training an artificial intelligence (AI) model to identify factors that increase an individual’s odds of receiving a diagnosis of CAD later in life.

Once that list of roughly 2,000 predictive features was narrowed down to just 53, it was time to put it to the test. Overall, the group found that their risk model achieved an area under the ROC curve (AUC) of 0.84. When tested on a completely independent patient population, meanwhile, the AUC was 0.81 for predicting an individual’s 10-year risk of CAD. This was seen as an improvement when compared to the clinical scores presently being used by care teams to evaluate their patients.
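
Conceptually, the workflow is: start from a very wide set of candidate predictors, narrow them with a data-driven selection step, fit a classifier, and report discrimination as ROC AUC on held-out data. A minimal sketch of that pattern with scikit-learn appears below; the synthetic data, the L1-based selector, and the gradient-boosting model are illustrative choices, not the published pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for ~2,000 candidate predictors of a CAD diagnosis.
X, y = make_classification(n_samples=3000, n_features=2000,
                           n_informative=53, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Step 1: narrow the feature set with an importance-based selector.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    max_features=53,
).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Step 2: fit the risk model on the reduced feature set and evaluate AUC.
model = GradientBoostingClassifier(random_state=0).fit(X_train_sel, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test_sel)[:, 1])
print(f"held-out ROC AUC: {auc:.2f}")
```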

“I think more precise and personalized risk prediction could motivate patients to engage in early prevention,” senior author Ali Torkamani, PhD, professor and director of genomics and genome informatics at the Scripps Research Translational Institute, said in a statement. “Our model first predicts the risk that a person will develop CAD, and then it provides information to allow personalized intervention.”

“Compared to traditional clinical tools, the new model improved risk classification for approximately one in four individuals — helping to better identify those truly at risk while avoiding unnecessary concern for those who are not,” added first author Shang-Fu “Shaun” Chen, a former doctoral student who worked with Torkamani.

The team behind this advanced risk model hope it can help identify more young and female patients who may face an increased risk of developing CAD. By finding these individuals early, clinicians can work to get out in front of the disease by adapting as necessary.

“We think the most important thing is for patients to be aware of their individual risks so that they can receive the appropriate treatments and make lifestyle changes,” Chen added.

The next step for this research is a long-term clinical test exploring the effectiveness of using this new risk model to improve patient care.




Tuesday, April 15, 2025

Machine Learning Predicts Premature Multimorbidity in Canadian Patients With IBD







Researchers have turned to artificial intelligence to better understand premature multimorbidity outcomes in Canadian patients with inflammatory bowel disease (IBD), including Crohn disease and ulcerative colitis. In the research, published in CMAJ, the primary study outcome was premature death, defined as death before age 75 years.

The researchers conducted a population-based, retrospective cohort study of 9,278 patients (49.3% women) with IBD who died between 2010 and 2020 in Ontario, Canada, using the Ontario Crohn’s and Colitis Cohort algorithms. The researchers then identified 17 chronic conditions present at age of IBD diagnosis using validated algorithms, including asthma, congestive heart failure, chronic obstructive pulmonary disease, diabetes, rheumatoid arthritis, hypertension, and dementia.

Chronic conditions without validated algorithms included myocardial infarction, osteoporosis, cardiac arrythmia, chronic coronary syndrome, stroke, renal failure, osteoarthritis and other arthritis (nonrheumatoid) types, mood disorders, and other mental health disorders. Cancer comorbidity was identified using the Ontario Cancer Registry.

Models were evaluated using accuracy, positive predictive value, sensitivity, F1 scores, area under the receiver operating characteristic curve (AUC), calibration plots, and explainability plots. The researchers used 3 supervised machine learning predictive tasks. Task 1 predicted premature death from conditions present at death. Task 2 predicted premature death from conditions developed by age 60 years. Task 3 predicted premature death from normalized age at diagnosis of early-life conditions (before age 60 years).
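
A minimal sketch of how one such task could be set up and scored with the metrics listed above is shown below, using simulated binary condition indicators and a random-forest classifier; none of this reflects the Ontario cohort data or the authors' actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

rng = np.random.default_rng(42)

# Simulated Task 2-style features: 17 chronic-condition indicators
# developed by age 60, plus sex, for a cohort of decedents with IBD.
n = 2000
X = rng.integers(0, 2, size=(n, 18)).astype(float)
# Simulated label: premature death (before age 75 years).
logits = X @ rng.normal(0.5, 0.3, size=18) - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]

print(f"accuracy:    {accuracy_score(y_te, pred):.2f}")
print(f"PPV:         {precision_score(y_te, pred):.2f}")
print(f"sensitivity: {recall_score(y_te, pred):.2f}")
print(f"F1:          {f1_score(y_te, pred):.2f}")
print(f"AUC:         {roc_auc_score(y_te, prob):.2f}")
```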

Of the 9,278 patients with IBD included in the study, 47.2% experienced premature death. The most prevalent conditions at age 60 years (Task 2) included osteoarthritis and other types of arthritis (39%), mood disorders (38.3%), and hypertension (29.5%). Per Task 1, the conditions with highest prevalence at death were osteoarthritis and other types of arthritis (76.8%), hypertension (72.8%), mood disorders (69%), renal failure (49.6%), and cancer (46.1%).

Of note, model performance improved when features included only conditions diagnosed before age 60 years. For Task 1, the “absence of chronic conditions, particularly those commonly developed later in life (e.g., dementia, chronic coronary syndrome, congestive heart failure) was leveraged by our model to predict premature death.”

By contrast, models for Tasks 2 and 3 identified a link between the presence of non-IBD chronic conditions and premature death. Specifically, for Task 3, key predictive features included younger age at diagnosis for mood disorders, osteoarthritis and other types of arthritis, mental health disorders, hypertension, and male sex. Non-IBD comorbidities displayed a strong predictive relationship with premature death (AUC, 0.81-0.95).

“We demonstrated that, among decedents with IBD, machine learning models can accurately predict premature death associated with non-IBD comorbidities, with stronger performance for models trained on early-life conditions (age ≤ 60 yr), suggesting these may be more important in determining one’s health trajectory,” the researchers concluded.




Apple AI stresses privacy with synthetic and anonymised data




Apple is taking a new approach to training its AI models – one that avoids collecting or copying user content from iPhones or Macs.

According to a recent blog post, the company plans to continue to rely on synthetic data (constructed data that is used to mimic user behaviour) and differential privacy to improve features like email summaries, without gaining access to personal emails or messages.

For users who opt in to Apple’s Device Analytics program, the company’s AI models will compare synthetic email-like messages against a small sample of a real user’s content stored locally on the device. The device then identifies which of the synthetic messages most closely matches its user sample, and sends information about the selected match back to Apple. No actual user data leaves the device, and Apple says it receives only aggregated information.
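
A minimal sketch of that on-device selection step, under the assumption that messages are compared via embeddings and cosine similarity: only the identifier of the best-matching synthetic candidate would leave the device. The embed function and the messages are placeholders, not Apple's implementation.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a real system would use a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Synthetic email-like candidates sent to the device (no user data involved).
synthetic_messages = [
    "Reminder: project review meeting moved to Friday at 10am.",
    "Your package has shipped and should arrive next week.",
    "Dinner on Saturday? Let me know if 7pm works.",
]

# A small sample of real messages that never leaves the device.
local_messages = ["Can we push the design review to Friday morning?"]

local_vecs = np.stack([embed(m) for m in local_messages])
candidate_vecs = np.stack([embed(m) for m in synthetic_messages])

# Cosine similarity of each synthetic candidate to the local sample.
scores = (candidate_vecs @ local_vecs.T).max(axis=1)
selected = int(np.argmax(scores))

# Only the index of the best-matching synthetic message is reported.
print(f"device reports candidate id: {selected}")
```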

The technique will allow Apple to improve its models for longer-form text generation tasks without collecting real user content. It’s an extension of the company’s long-standing use of differential privacy, which introduces randomised data into broader datasets to help protect individual identities. Apple has used this method since 2016 to understand use patterns, in line with the company’s safeguarding policies.
Improving Genmoji and other Apple Intelligence features

The company already uses differential privacy to improve features like Genmoji, where it collects general trends about which prompts are most popular without linking any prompt with a specific user or device. In upcoming releases, Apple plans to apply similar methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools.

For Genmoji, the company anonymously polls participating devices to determine whether specific prompt fragments have been seen. Each device responds with a noisy signal – some responses reflect actual use, while others are randomised. The approach ensures that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device, the company says.
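
The "noisy signal" described here is essentially randomised response, a standard local differential-privacy technique. A minimal sketch under that assumption follows; the flip probability and simulated usage rate are illustrative, not Apple's parameters.

```python
import random

def noisy_report(actually_seen: bool, flip_prob: float = 0.25) -> bool:
    """Each device sometimes answers at random, so no single
    response reveals whether that device truly saw the fragment."""
    if random.random() < flip_prob:
        return random.random() < 0.5
    return actually_seen

def estimate_true_rate(reports, flip_prob: float = 0.25) -> float:
    """Debias the aggregate: observed = (1 - p) * true + p * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - flip_prob * 0.5) / (1 - flip_prob)

# Poll 100,000 simulated devices about one prompt fragment that
# 12% of them actually used.
random.seed(0)
true_usage = [random.random() < 0.12 for _ in range(100_000)]
reports = [noisy_report(seen) for seen in true_usage]

print(f"raw noisy rate:    {sum(reports) / len(reports):.3f}")
print(f"debiased estimate: {estimate_true_rate(reports):.3f}")
```

Only the aggregate, debiased rate is meaningful; any individual device's report could have been random, which is what protects the user.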




Wednesday, April 9, 2025

DeepSeek’s AIs: What humans really want





Chinese AI startup DeepSeek has solved a problem that has frustrated AI researchers for several years. Its breakthrough in AI reward models could dramatically improve how AI systems reason and respond to questions.

In partnership with Tsinghua University researchers, DeepSeek has created a technique detailed in a research paper, titled “Inference-Time Scaling for Generalist Reward Modeling.” It outlines how a new approach outperforms existing methods and how the team “achieved competitive performance” compared to strong public reward models.

The innovation focuses on enhancing how AI systems learn from human preferences – an important aspect of creating more useful and aligned artificial intelligence.

What are AI reward models, and why do they matter?

AI reward models are important components in reinforcement learning for large language models. They provide feedback signals that help guide an AI’s behaviour toward preferred outcomes. In simpler terms, reward models are like digital teachers that help AI understand what humans want from their responses.

“Reward modeling is a process that guides an LLM towards human preferences,” the DeepSeek paper states. Reward modeling becomes important as AI systems get more sophisticated and are deployed in scenarios beyond simple question-answering tasks.

The innovation from DeepSeek addresses the challenge of obtaining accurate reward signals for LLMs in different domains. While current reward models work well for verifiable questions or artificial rules, they struggle in general domains where criteria are more diverse and complex.
The dual approach: How DeepSeek’s method works

DeepSeek’s approach combines two methods:
Generative reward modeling (GRM): This approach enables flexibility in different input types and allows for scaling during inference time. Unlike previous scalar or semi-scalar approaches, GRM provides a richer representation of rewards through language.
Self-principled critique tuning (SPCT): A learning method that fosters scalable reward-generation behaviours in GRMs through online reinforcement learning, generating principles adaptively.

One of the paper’s authors from Tsinghua University and DeepSeek-AI, Zijun Liu, explained that the combination of methods allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

The approach is particularly valuable for its potential for “inference-time scaling” – improving performance by increasing computational resources during inference rather than just during training.

The researchers found that their methods could achieve better results with increased sampling, letting models generate better rewards with more computing.
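
A minimal sketch of that inference-time scaling idea: sample several independent judgments of the same pair of responses from a (here simulated) generative reward model and aggregate them by majority vote, so that spending more compute per judgment yields a more reliable preference signal. The judge_once function is a stand-in, not DeepSeek's GRM.

```python
import random
from collections import Counter

def judge_once(prompt: str, response_a: str, response_b: str) -> str:
    """Placeholder for one sampled judgment from a generative reward model.
    A real GRM would generate principles, then a critique, then a verdict."""
    # Simulate a noisy judge that prefers the better answer 70% of the time.
    return "A" if random.random() < 0.7 else "B"

def scaled_reward(prompt: str, a: str, b: str, n_samples: int) -> str:
    """Inference-time scaling: sample several judgments and take a majority vote."""
    votes = Counter(judge_once(prompt, a, b) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

random.seed(0)
prompt = "Explain why the sky is blue."
for n in (1, 4, 16, 64):
    wins = sum(scaled_reward(prompt, "good answer", "weak answer", n) == "A"
               for _ in range(200))
    print(f"{n:>3} samples per judgment -> correct preference {wins/200:.0%} of the time")
```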
Implications for the AI Industry

DeepSeek’s innovation comes at an important time in AI development. The paper states “reinforcement learning (RL) has been widely adopted in post-training for large language models […] at scale,” leading to “remarkable improvements in human value alignment, long-term reasoning, and environment adaptation for LLMs.”

The new approach to reward modelling could have several implications:
More accurate AI feedback: By creating better reward models, AI systems can receive more precise feedback about their outputs, leading to improved responses over time.
Increased adaptability: The ability to scale model performance during inference means AI systems can adapt to different computational constraints and requirements.
Broader application: Systems can perform better in a broader range of tasks by improving reward modelling for general domains.
More efficient resource use: The research shows that inference-time scaling with DeepSeek’s method could outperform model size scaling in training time, potentially allowing smaller models to perform comparably to larger ones with appropriate inference-time resources.
DeepSeek’s growing influence

The latest development adds to DeepSeek’s rising profile in global AI. Founded in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made waves with its V3 foundation and R1 reasoning models.

The company upgraded its V3 model (DeepSeek-V3-0324) recently, which the company said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.” DeepSeek has committed to open-source AI, releasing five code repositories in February that allow developers to review and contribute to development.

While speculation continues about the potential release of DeepSeek-R2 (the successor to R1) – Reuters has speculated on possible release dates – DeepSeek has not commented in its official channels.

What’s next for AI reward models?

According to the researchers, DeepSeek intends to make the GRM models open-source, although no specific timeline has been provided. Open-sourcing could accelerate progress in the field by allowing broader experimentation with reward models.

As reinforcement learning continues to play an important role in AI development, advances in reward modelling like those in DeepSeek and Tsinghua University’s work will likely have an impact on the abilities and behaviour of AI systems.

Work on AI reward models demonstrates that innovations in how and when models learn can be as important as increasing their size. By focusing on feedback quality and scalability, DeepSeek addresses one of the fundamental challenges in creating AI that better understands and aligns with human preferences.


