Multitask Machine Learning
Our client is a major international pharmaceutical company that conducts research and development across a wide range of human medical conditions, including mental illness, neurological disorders, and cancer.
For every pharmaceutical company, it is vitally important to maintain its drug discovery pipeline—the set of drug candidates under development. The search for drug candidates is extremely expensive, as it involves thousands of experiments to find substances with the desired effect on biotargets.
Our client tasked us with delivering a very fast, highly scalable data pipeline that uses various machine learning algorithms to learn and predict chemical compound activity, reducing the number of real-world experiments.
Among other tasks (see the case study ‘Machine Learning for Biochemistry’ for a full description), we were to develop a solution supporting multi-task learning, i.e. the ability of a model to solve several learning tasks at the same time while exploiting commonalities and differences across tasks. This makes learning more efficient through shared resource use and the positive impact tasks have on each other.
Our ML models support multitasking: the target tensor is a 2D matrix whose columns correspond to different tasks (e.g. several biological targets). For classification we also support multilabel multitask target tensors, which contain multiple labels for each task.
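To make the target layout concrete, here is a minimal sketch (not the client's actual data format) of a sparse 2D multitask target matrix: rows are compounds, columns are tasks, and a gap (`None`) marks a compound that was never measured against that target. The `Y` values and the `task_labels` helper are illustrative assumptions.

```python
# Hypothetical multitask target matrix: rows = compounds,
# columns = tasks (e.g. biological targets).
# None marks a missing measurement for that compound/task pair.
Y = [
    [1,    0,    None],  # compound A: active on task 0, inactive on task 1
    [None, 1,    1],     # compound B: measured only on tasks 1 and 2
    [0,    None, 0],     # compound C: measured on tasks 0 and 2
]

def task_labels(targets, task):
    """Collect (row_index, label) pairs for one task column, skipping gaps."""
    return [(i, row[task]) for i, row in enumerate(targets)
            if row[task] is not None]

# The labelled subset available when training on task 2:
task_labels(Y, 2)  # → [(1, 1), (2, 0)]
```

In practice the per-task loss is computed only over these labelled entries, so compounds with missing measurements still contribute to the other tasks.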
We also created a framework that speeds up multitask computations and improves model performance using several modern approaches:
- Task Affinity Grouping improves model accuracy and reduces the number of tasks by finding groups of tasks that have a positive impact on each other.
- AdaShare (adaptive sharing) utilizes parameter sharing between tasks: the method decides ‘what to share across which tasks to achieve the best recognition accuracy, while taking resource efficiency into account’.
- L-DRO (lookahead distributionally robust optimization) uses game theory to improve the worst-performing task.
- PCGrad operates on gradients, resolving conflicts between tasks whose gradients point in opposing directions.
- MolTSE (Molecular Tasks Similarity Estimator) ‘projects individual tasks into a latent space and measures the distance between the embedded vectors to derive the task similarity estimation and thus enhance the molecular prediction results’.
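Of the methods above, PCGrad has a particularly compact core idea, sketched below (a dependency-free illustration of the published algorithm by Yu et al., not the framework's production code): when two tasks' gradients conflict (negative dot product), each is projected onto the normal plane of the other before the per-task gradients are combined.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcgrad(task_grads):
    """PCGrad-style surgery: for each task gradient, project out the
    component that conflicts with any other task's gradient, then
    average the projected gradients into one update direction."""
    projected = []
    for i, g in enumerate(task_grads):
        g = list(g)  # work on a copy
        others = task_grads[:i] + task_grads[i + 1:]
        random.shuffle(others)  # random pairing order, as in the original method
        for h in others:
            d = dot(g, h)
            if d < 0:  # conflicting gradients
                g = [gi - (d / dot(h, h)) * hi for gi, hi in zip(g, h)]
        projected.append(g)
    n = len(projected)
    return [sum(col) / n for col in zip(*projected)]

# Two conflicting gradients (dot product is -1):
g1, g2 = [1.0, 0.0], [-1.0, 1.0]
update = pcgrad([g1, g2])  # → [0.25, 0.75]
```

After projection, neither task's gradient has a component opposing the other, so the averaged update no longer cancels progress on either task.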
We adapted some of the original algorithms to handle a large number of tasks (over 500) in reasonable time. In our generic framework, users can plug in arbitrary neural networks for binary and multilabel classification problems and for regression problems.
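A common way to let arbitrary networks serve many tasks at once is hard parameter sharing: one shared trunk feeds a separate head per task, so adding tasks only adds heads. The toy model below is a minimal, dependency-free sketch of that pattern (illustrative only; the weight values and class names are assumptions, not the framework's API).

```python
def linear(W, x):
    """Apply a weight matrix W (list of rows) to vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

class MultiTaskNet:
    """Hard parameter sharing: one shared trunk, one head per task."""

    def __init__(self, trunk_W, head_Ws):
        self.trunk_W = trunk_W   # shared across all tasks
        self.head_Ws = head_Ws   # one weight matrix per task

    def forward(self, x):
        h = linear(self.trunk_W, x)                  # shared representation
        return [linear(W, h) for W in self.head_Ws]  # per-task outputs

# Identity trunk and two single-output heads, for illustration:
net = MultiTaskNet(
    trunk_W=[[1, 0], [0, 1]],
    head_Ws=[[[1, 1]], [[1, -1]]],
)
net.forward([2, 3])  # → [[5], [-1]]
```

With 500+ tasks, the shared trunk is where the methods listed above act: task affinity grouping decides which heads share a trunk, and gradient surgery such as PCGrad reconciles the heads' updates to the shared parameters.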