Computers learn how to create drugs of the future

The key role of computer technology in the fine-tuning of drug development and design will be considered by Professor Stephen Muggleton of Imperial College, London in his inaugural lecture, Models of Mind and Models of Body, today.

The new Professor of Bioinformatics in the Department of Computing will focus on how machine learning and logic programming can reduce the high costs of drug development in the pharmaceutical industry.

The pharmaceutical industry is increasingly overwhelmed by large volumes of data generated both internally, as a result of screening tests and combinatorial chemistry, and externally, from sources such as the Human Genome Project. The majority of drug development is dependent on sifting through this information and using it to identify slight improvements in variants of patented active drugs.

Applying inductive logic programming (ILP), a research area formed at the intersection of machine learning and logic programming, Professor Muggleton and his team have shown that it is possible to construct rules that accurately predict the activity of untried drugs.

“Research and development in the pharmaceutical industry involves laboratories of chemists synthesising and testing hundreds of compounds, often at great expense,” said Professor Muggleton.

“It is now possible to construct rules that predict whether drugs will work from examples of drugs with known medicinal activity. The accuracy of the rules has been shown to be slightly higher than traditional statistical methods used in drug development.”
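As a loose illustration of the rule induction described above, the sketch below learns a rule from examples of compounds with known activity. This is a toy propositional version with invented data and feature names; real ILP systems such as Progol work in first-order logic with background chemical knowledge:

```python
# Toy sketch of ILP-style rule induction: find a conjunction of
# structural features shared by all active compounds that covers
# no inactive compound, then use it to predict untried compounds.
# (Illustrative only; the data and feature names are invented.)

# Hypothetical training data: each compound is a set of structural
# features, labelled with whether it showed medicinal activity.
compounds = [
    ({"aromatic_ring", "hydroxyl", "small"}, True),
    ({"aromatic_ring", "hydroxyl", "large"}, True),
    ({"aromatic_ring", "small"}, False),
    ({"hydroxyl", "small"}, False),
]

def learn_rule(examples):
    """Return the conjunction of features common to every active
    compound, provided it excludes all inactive compounds."""
    positives = [f for f, active in examples if active]
    negatives = [f for f, active in examples if not active]
    # Start from the features shared by every active example...
    rule = set.intersection(*positives)
    # ...and keep the rule only if no inactive example satisfies it.
    if any(rule <= neg for neg in negatives):
        return None
    return rule

def predict(rule, features):
    """Predict a compound active if it satisfies every literal in the rule."""
    return rule is not None and rule <= features

rule = learn_rule(compounds)
print(sorted(rule))                                  # features in the learned rule
print(predict(rule, {"aromatic_ring", "hydroxyl"}))  # an untried compound
```

Here the learned rule is "active if the compound has an aromatic ring and a hydroxyl group", and it generalises to compounds outside the training set; first-order ILP extends this idea with relational structure and background knowledge.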

Recent research successes in the Computational Bioinformatics Laboratory led by Professor Muggleton include a collaboration with the pharmaceutical company SmithKline Beecham (now GlaxoSmithKline) that has yielded a machine-learning model that can identify novel neuropeptides at over 100 times the rate of the GSK in-house model. Working with the Universities of Manchester and Aberystwyth, researchers led by Professor Muggleton have developed a system that automatically suggests experiments for determining the function of genes in yeast, an important model organism in biological research.

“It is already clear that, during the 21st century, computers will play an increasingly central role in supporting the fundamental formulation and testing of scientific hypotheses. The automatic construction and testing of hypotheses, and their eventual incorporation into accepted knowledge bases, will require an ability to handle incomplete, incorrect and imprecise information,” he added.

Media Contact

Judith H Moore, AlphaGalileo
