Dr Dany Varghese
About
My research project
One-shot meta-interpretive learning
I am currently pursuing a PhD in the area of Computer Vision, which incorporates the advantages of one-shot meta-interpretive learning in the domain of plant disease detection.
Previously, I worked as an Assistant Professor in the Department of Computer Science & Engineering, Jyothi Engineering College (NAAC & NBA accredited), Kerala, India. My research interests lie in the areas of Image Processing and Machine Learning.
Supervisors
Research
Research interests
Humanity faces a great challenge in feeding a growing population of 7.7 billion people, and food security remains threatened by a number of factors, including new plant diseases. Moreover, the excessive use of chemicals to fight plant diseases has had adverse effects on the agro-ecosystem. There is an immediate need for early and precise diagnostic techniques to control plant diseases and sustain the ecosystem. State-of-the-art algorithms for this problem require building a new model from a very large number of previous cases, and there is currently no algorithm that can learn an accurate model from a single new case, e.g. a single image.
Proposed Method:
We are developing a new framework called One-Shot Meta-Interpretive Learning (OSMIL) for the problem of plant disease detection from a single image. Meta-Interpretive Learning (MIL) has already been used in a computer vision framework called Logical Vision (LV), which was shown to overcome some of the limitations of statistics-based algorithms. LV first uses background knowledge about symbols to guide the sampling of low-level features such as pixel values, shapes, edges and colours, and then uses the sampled results to revise previously conjectured mid-level symbols. With the extracted mid-level feature symbols as background knowledge, a generalised MIL setting is used to learn high-level visual concepts. OSMIL will enhance the constructive paradigm of LV through its ability to learn recursive theories, invent predicates and learn from a single example.
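To make the first stage of this pipeline concrete, below is a minimal sketch of sampling low-level features from a leaf image and emitting them as symbolic facts that a meta-interpretive learner could take as background knowledge. The image path, region scheme, thresholds and predicate names are illustrative assumptions, not OSMIL's actual interface.

```python
# A minimal sketch, not OSMIL itself: sample low-level features from a leaf
# image and emit them as symbolic facts for use as background knowledge.
# Paths, thresholds and predicate names are illustrative assumptions.
import cv2

def image_to_facts(path, n_regions=4):
    img = cv2.imread(path)                        # BGR leaf image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)             # low-level edge map

    facts = []
    h, _ = gray.shape
    for i in range(n_regions):                    # coarse horizontal bands
        band = slice(i * h // n_regions, (i + 1) * h // n_regions)
        mean_colour = img[band].reshape(-1, 3).mean(axis=0)
        edge_density = edges[band].mean() / 255.0
        # conjectured mid-level symbols derived from the sampled features
        facts.append(f"region_colour(r{i}, {mean_colour.round().tolist()}).")
        facts.append(f"edge_density(r{i}, {edge_density:.2f}).")
    return facts

# The facts would then be passed, together with disease-specific background
# knowledge, to a meta-interpretive learner to induce high-level concepts.
```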
Teaching
I am currently the lab coordinator for the Data Mining & Machine Learning lab course.
I also worked as an assistant professor at Jyothi Engineering College, India, where I taught subjects including:
- Theory of Computation
- Compiler Design (PG, UG)
- Advanced Data Structures (PG)
- Graph Theory & Combinatorics
- Logic for Computer Science
- Operating Systems
- C, Python
Publications
Cognitive computing is an emerging approach that helps to analyse human brain behaviour and simulate it mathematically. Cognitive computing systems learn and interact naturally with people to extend what either humans or machines could do on their own. Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics and anthropology. Cognitive computing helps autonomous systems to work like the human brain. COMPASS is a simulator which simulates the working of cognitive computing; it is based on the TrueNorth architecture developed by IBM and enables brain-like functions to be simulated on a hardware platform.
Images with high resolution are a necessity in almost all image processing applications. Super resolution (SR) is a method in image processing for creating a high-resolution (HR) image from several, or a single, low-resolution (LR) image, so that high spatial-frequency information can be recovered. SR methods are applied to LR images in order to increase the spatial resolution of a new image. Super-resolution processing includes two main tasks: up-sampling the image and removing degradations that arise during image capture. In effect, the super-resolution process tries to generate the missing high-frequency components. Applications include HDTV, biological imaging, etc. In this work we address the problem of producing an HR image from a single low-resolution image using statistical mathematical models. The performance of these algorithms was evaluated using the objective image-quality criteria PSNR and MSSIM, and compared with other existing methods.
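As a concrete illustration of the evaluation criteria mentioned in this abstract, here is a minimal sketch of computing PSNR and mean SSIM between a super-resolved image and its high-resolution reference, assuming scikit-image is available; the function and variable names are assumptions, not the paper's code.

```python
# A minimal sketch, assuming scikit-image: compute PSNR and mean SSIM between
# a super-resolved image and its high-resolution reference.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference, reconstructed):
    """reference, reconstructed: uint8 grayscale arrays of identical shape."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    mssim = structural_similarity(reference, reconstructed, data_range=255)
    return psnr, mssim
```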
The general focus of domain adaptation is transferring knowledge learned from a labelled training (source) domain to an unlabelled test (target) domain. Domain adaptation tries to minimise the domain-shift problem by training a classifier on labelled source data collected under one set of conditions and applying it to test data collected under different conditions. Common adaptation approaches learn a new feature space using the labelled source domain and an unlabelled target domain with similar characteristics, after which a supervised, unsupervised or semi-supervised classifier carries out the remaining task. This work presents the design of an incremental KM-ELM classifier that can be used for better classification across various domain adaptation tasks. The classifier is a fusion of the high-performing K-Means algorithm and the fast Extreme Learning Machine (ELM) neural network. It exploits the cross-domain learning capability of ELM together with PCA and the Geodesic Flow Kernel (GFK) to address the domain adaptation task. First, PCA and PLS are used to create subspaces of the test and training data, and these subspaces are treated as points on a Grassmann manifold. The geodesic-based representation of the domain shift is then computed, and the integration of these points creates intermediate cross-domain subspaces. This forms a new space containing feature vectors from both the training and test domains in which the likelihood of these vectors is maximised.
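For illustration, below is a minimal sketch of a plain Extreme Learning Machine, the fast single-hidden-layer network referred to in this abstract: the hidden weights are random and only the output weights are solved in closed form. This is a generic sketch, not the paper's KM-ELM or GFK implementation; all names are illustrative.

```python
# A minimal sketch of a plain Extreme Learning Machine (ELM): random hidden
# weights, output weights solved in closed form with a pseudoinverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden-layer outputs
        self.beta = np.linalg.pinv(H) @ y_onehot   # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)      # predicted class indices
```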
Alzheimer's disease (AD) is one of the most rapidly progressing brain disorders; it gradually damages memory and thinking skills and, later, the ability to carry out normal tasks. It is the most common cause of dementia in older adults. While dementia is more common as people grow older, it is not a normal part of aging. One of the first signs of Alzheimer's disease is memory loss, and AD accounts for up to 80% of cases of dementia. The three stages of AD are mild, moderate and severe. In mild cognitive impairment (MCI), the loss of cognitive skills only slightly affects a person's daily life; the moderate stage is the middle stage of AD; and in severe AD, a person is no longer able to function independently and becomes totally reliant on others for care. In this paper, a Support Vector Machine (SVM) is used to diagnose Alzheimer's disease from brain MRI and to classify it into specific stages. The algorithm was trained and tested using MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The data used include MRI scans of about 70 AD patients and 30 normal controls.
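As an illustration of the classification step described here, the sketch below trains an RBF-kernel SVM on placeholder feature vectors standing in for features extracted from MRI scans; it is not the paper's ADNI pipeline, and all data and parameters are assumptions.

```python
# A minimal sketch of the classification step: an RBF-kernel SVM on
# placeholder features standing in for MRI-derived feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(100, 500)            # placeholder: one feature vector per scan
y = np.random.randint(0, 3, size=100)   # placeholder: 0=normal, 1=MCI, 2=AD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```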
Unlike most computer vision approaches, which depend on hundreds or thousands of training images, humans can typically learn from a single visual example. Humans achieve this ability using background knowledge. Rule-based machine learning approaches such as Inductive Logic Programming (ILP) provide a framework for incorporating domain-specific background knowledge. These approaches have the potential for human-like learning from small data or even one-shot learning, i.e. learning from a single positive example. By contrast, statistics-based computer vision algorithms, including Deep Learning, have no general mechanisms for incorporating background knowledge. In this paper, we present an approach for one-shot rule learning called One-Shot Hypothesis Derivation (OSHD) which is based on using a logic program declarative bias. We apply this approach to the challenging task of Malayalam character recognition. This is a challenging task due to the spherical and complex structure of the hand-written Malayalam language. Unlike for other languages, there is currently no efficient algorithm for Malayalam hand-written recognition. We compare our results with a state-of-the-art Deep Learning approach, called Siamese Network, which has been developed for one-shot learning. The results suggest that our approach can generate human-understandable rules and also outperforms the deep learning approach with a significantly higher average predictive accuracy.
Unlike most computer vision approaches, which depend on hundreds or thousands of training images, humans can typically learn from a single visual example. Humans achieve this ability using background knowledge. Rule-based machine learning approaches such as Inductive Logic Programming (ILP) provide a framework for incorporating domain-specific background knowledge. These approaches have the potential for human-like learning from small data or even one-shot learning, i.e. learning from a single positive example. By contrast, statistics-based computer vision algorithms, including Deep Learning, have no general mechanisms for incorporating background knowledge. This paper presents an approach for one-shot rule learning called One-Shot Hypothesis Derivation (OSHD), based on using a logic program declarative bias. We apply this approach to two challenging human-like computer vision tasks: 1) Malayalam character recognition and 2) neurological diagnosis using retinal images. We compare our results with a state-of-the-art Deep Learning approach, called Siamese Network, developed for one-shot learning. The results suggest that our approach can generate human-understandable rules and outperforms the deep learning approach with a significantly higher average predictive accuracy.
Plant diseases are one of the main causes of crop loss in agriculture. Machine learning, in particular statistical and neural net (NN) approaches, has been used to help farmers identify plant diseases. However, since new diseases continue to appear in agriculture due to climate change and other factors, we need more data-efficient approaches to identify and classify new diseases as early as possible. Even though statistical machine learning approaches and neural nets have demonstrated state-of-the-art results on many classification tasks, they usually require a large amount of training data, which may not be available for emergent plant diseases. Data-efficient approaches are therefore essential for an early and precise diagnosis of new plant diseases and necessary to prevent the disease's spread. This study explores a data-efficient Inductive Logic Programming (ILP) approach for plant disease classification. We compare several ILP algorithms (including our new implementation, PyGol) with several statistical and neural-net-based machine learning algorithms on the task of tomato plant disease classification with varying sizes of training data (6, 10, 50 and 100 training images per disease class). The results suggest that ILP outperforms the other learning algorithms, and this is more evident when fewer training data are available.
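To illustrate the experimental protocol of varying training-set sizes, here is a minimal sketch that evaluates a generic baseline classifier with 6, 10, 50 and 100 training examples per class; the data, features and classifier are placeholders, not PyGol or the paper's actual baselines.

```python
# A minimal sketch of the evaluation protocol: train a generic baseline with
# 6, 10, 50 and 100 examples per class and report test accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subsample_per_class(X, y, n_per_class, rng):
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n_per_class, replace=False)
                          for c in np.unique(y)])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X_train = np.random.rand(1000, 64)          # placeholder image features
y_train = np.repeat(np.arange(10), 100)     # 10 classes, 100 examples each
X_test = np.random.rand(200, 64)            # placeholder test features
y_test = np.repeat(np.arange(10), 20)

for n in (6, 10, 50, 100):                  # training images per disease class
    X_sub, y_sub = subsample_per_class(X_train, y_train, n, rng)
    clf = RandomForestClassifier(random_state=0).fit(X_sub, y_sub)
    print(f"{n} images/class -> accuracy {clf.score(X_test, y_test):.3f}")
```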