About me
This is a page not in the main menu
Published in Responsible Computer Vision workshop, CVPR, 2021
Rakshit Naidu, Ankita Ghosh, Yash Maurya, Shamanth R Nayak K, Soumya Snigdha Kundu
Download here
Published in Machine Learning for Creativity and Design workshop, NeurIPS, 2021
Harsh Rathod, Manisimha Varma, Parna Chowdhury, Sameer Saxena, V Manushree, Ankita Ghosh, Sahil Khose
Download here
Published in Tackling Climate Change with Machine Learning workshop, NeurIPS, 2021
Ankita Ghosh, Sahil Khose, Abhiraj Tiwari
Download here
Published in 2023 IEEE 20th India Council International Conference (INDICON), 2024
Sahil Khose, Ankita Ghosh, Yogish S. Kamath, Neetha I. R. Kuzhuppilly, J. R. Harish Kumar
Download here
Published in 2023 IEEE 20th India Council International Conference (INDICON), 2024
Ankita Ghosh, Sahil Khose, Yogish S. Kamath, Neetha I. R. Kuzhuppilly, J. R. Harish Kumar
Download here
Published:
Zero-shot learning (ZSL) has attracted significant attention due to its capability to classify new images from unseen classes. In this paper, we propose to address a new and challenging task, namely explainable zero-shot learning (XZSL), which aims to generate visual and textual explanations to support the classification decision. Link for the video
Published:
Paper presentation and discussion of "Semi-Supervised Classification and Segmentation on High Resolution Aerial Images" by the authors Sahil Khose, Abhiraj Tiwari, and Ankita Ghosh. Link for the video
Published:
After dominating Natural Language Processing, Transformers have recently taken over Computer Vision with the advent of Vision Transformers. However, the attention mechanism's quadratic complexity in the number of tokens means that Transformers do not scale well to high-resolution images. XCiT is a new Transformer architecture built around XCA, a transposed version of attention that reduces the complexity from quadratic to linear, and at least on image data it appears to perform on par with other models. What does this mean for the field? Is this even a transformer? What really matters in deep learning? Link for the video
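To make the transposition concrete, here is a minimal PyTorch sketch of a cross-covariance attention (XCA) layer in the spirit of the XCiT paper: attention is computed between feature channels rather than tokens, so the attention map is d_head x d_head and the cost grows linearly with sequence length. The per-head learnable temperature follows the paper's description; the rest of the shapes and defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XCA(nn.Module):
    """Cross-covariance attention sketch: attends over feature channels,
    not tokens, so cost is linear in sequence length N."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, dim)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)     # each: (B, heads, d_head, N)
        q = F.normalize(q, dim=-1)               # L2-normalize along tokens
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, heads, d_head, d_head)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)  # back to (B, N, dim)
        return self.proj(out)
```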
Published:
DeepMind's breakthrough paper: 'Contrastive Predictive Coding 2.0' (CPC 2). With just 2% of the ImageNet data, CPC 2 crushes AlexNet's scores of 59.3% Top-1 and 81.8% Top-5 accuracy (reaching 60.4% and 83.9%), and given just 1% of the ImageNet data it achieves 78.3% Top-5 accuracy, outperforming a supervised classifier trained on 5x more data! Continuing with training on all the available images (100%), it not only outperforms fully supervised systems by 3.2% Top-1 accuracy, it still manages to outperform those supervised models with just 50% of the ImageNet data! Link for the video
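At the heart of CPC is a contrastive objective: a context network predicts future latent representations, and an InfoNCE loss scores each prediction against the true latent among negatives drawn from the batch. A minimal sketch, assuming cosine similarity and a temperature hyperparameter (common choices, not necessarily CPC 2's exact formulation):

```python
import torch
import torch.nn.functional as F

def info_nce(predictions, targets, temperature=0.1):
    """InfoNCE loss (illustrative sketch).
    predictions: (B, D) context-based predictions of future latents
    targets:     (B, D) true encoded latents; other rows act as negatives
    """
    predictions = F.normalize(predictions, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = predictions @ targets.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)             # match row i to column i
```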
Published:
This paper introduces SimCLRv2 and shows that semi-supervised learning benefits strongly from self-supervised pre-training. Strikingly, it shows that bigger models yield larger gains when fine-tuned on fewer labelled examples. Link for the video
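SimCLRv2's recipe has three stages: self-supervised pre-training of a big network, supervised fine-tuning on the few available labels, and distillation into a (possibly smaller) student using unlabeled data. A sketch of the distillation stage; the temperature value is an assumption, not the paper's exact setting:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau=1.0):
    """Stage-3 distillation sketch: the fine-tuned teacher labels unlabeled
    images and the student matches the softened label distribution."""
    teacher_probs = F.softmax(teacher_logits / tau, dim=-1)
    student_logp = F.log_softmax(student_logits / tau, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()
```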
Published:
This paper shows that tiny episodic memories are helpful in continual learning: repeated training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it! Link for the video
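The mechanism is experience replay: keep a small buffer of past-task examples and mix a few of them into every training batch. A minimal sketch, with reservoir sampling as the (assumed) buffer-maintenance strategy; each gradient step would then use the current batch concatenated with memory.sample(k):

```python
import random

class EpisodicMemory:
    """Tiny replay buffer with reservoir sampling (illustrative sketch;
    capacity and sampling strategy are assumptions)."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)   # new item kept w.p. capacity/seen
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))
```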
Published:
This paper generates high-resolution bird images from text descriptions for the first time using a multi-stage GAN approach, beating SOTA results on 3 datasets. Link for the video
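The multi-stage idea: a first generator sketches a coarse, low-resolution image from the text embedding and noise, and a second stage refines it to high resolution, again conditioned on the text. A deliberately bare skeleton, with linear layers standing in for the convolutional upsampling a real model would use; all shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TwoStageTextToImage(nn.Module):
    """Skeleton of a multi-stage text-to-image generator (illustrative,
    not the paper's exact architecture)."""
    def __init__(self, text_dim=128, z_dim=100):
        super().__init__()
        self.stage1 = nn.Sequential(           # text + noise -> coarse 64x64 image
            nn.Linear(text_dim + z_dim, 64 * 64 * 3), nn.Tanh())
        self.stage2 = nn.Sequential(           # coarse image + text -> refined 256x256
            nn.Linear(64 * 64 * 3 + text_dim, 256 * 256 * 3), nn.Tanh())

    def forward(self, text_emb, z):
        coarse = self.stage1(torch.cat([text_emb, z], dim=-1))
        refined = self.stage2(torch.cat([coarse, text_emb], dim=-1))
        return coarse.view(-1, 3, 64, 64), refined.view(-1, 3, 256, 256)
```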
Published:
This paper is the first to combine convolutional neural networks and self-attention for the question-answering task, breaking the human benchmark on one of the most complex tasks in NLP. Link for the video
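A sketch of what such an encoder block can look like: a depthwise convolution, multi-head self-attention, and a feed-forward layer, each wrapped in a pre-norm residual. Kernel size, head count, and dimensions are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    """Encoder block mixing convolution and self-attention (sketch)."""
    def __init__(self, dim=128, heads=8, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2,
                              groups=dim)                  # depthwise conv over time
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(),
                                 nn.Linear(dim * 4, dim))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, x):                                  # x: (B, T, dim)
        h = self.n1(x)
        x = x + self.conv(h.transpose(1, 2)).transpose(1, 2)
        h = self.n2(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.n3(x))
```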
Published:
Instead of transferring knowledge from a teacher model to a separate student model, self-distillation performs the transfer within the same model! This approach leads to faster inference and smaller models! Link for the video
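One way to realize this: attach auxiliary classifiers at intermediate layers and let the network's own deepest classifier act as the teacher for the shallower ones, so the shallow branches can serve as fast early exits at inference time. A hedged sketch of such a loss; alpha and tau are illustrative hyperparameters, not the paper's values:

```python
import torch.nn.functional as F

def self_distillation_loss(branch_logits, final_logits, labels, tau=3.0, alpha=0.5):
    """Self-distillation within one network (sketch): the deepest classifier
    teaches classifiers attached at shallower layers."""
    loss = F.cross_entropy(final_logits, labels)
    teacher = F.softmax(final_logits.detach() / tau, dim=-1)
    for logits in branch_logits:              # one auxiliary classifier per stage
        loss = loss + alpha * F.cross_entropy(logits, labels)
        loss = loss + (1 - alpha) * tau**2 * F.kl_div(
            F.log_softmax(logits / tau, dim=-1), teacher, reduction="batchmean")
    return loss
```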
Published:
This paper introduces the instruction tuning method to train their model FLAN (Finetuned LAnguage Net), which, evaluated in a zero-shot fashion, beats zero-shot GPT-3 on 19 of 25 tasks and even outperforms few-shot GPT-3 on 10 of them! Link for the video
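Instruction tuning works by rephrasing existing supervised datasets as natural-language instructions, fine-tuning on many such tasks, and then evaluating zero-shot on held-out task clusters. A sketch of the templating step for an NLI-style task; the template wording here is an assumption in the spirit of FLAN, not the paper's exact template set:

```python
import random

# Illustrative instruction templates for one task type (NLI).
NLI_TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
    "{premise}\nBased on the paragraph above, can we conclude that \"{hypothesis}\"?",
    "Read this: {premise}\nIs it true that {hypothesis}? Answer yes, no, or maybe.",
]

def to_instruction_example(premise, hypothesis, label):
    """Turn a raw NLI example into an (instruction, target) text pair;
    a random template adds phrasing diversity."""
    prompt = random.choice(NLI_TEMPLATES).format(premise=premise, hypothesis=hypothesis)
    return prompt, label   # e.g. label in {"yes", "no", "maybe"}
```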
Published:
This paper addresses one of the major challenges of Zero-Shot Object Detection: reducing the ambiguity between the background class and unseen classes. With their method, they achieve SOTA results for ZSD! Link for the video
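For context, a common zero-shot detection setup (not necessarily this paper's method) classifies region proposals by projecting their features into a word-embedding space and scoring them against class vectors; one simple way to soften the background/unseen ambiguity is to make the background representation itself learnable rather than a fixed catch-all class. A hedged sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticRegionClassifier(nn.Module):
    """Illustrative zero-shot detection head: region features scored against
    semantic class embeddings, with a learnable background vector."""
    def __init__(self, feat_dim, class_embeddings):      # (num_classes, emb_dim)
        super().__init__()
        emb_dim = class_embeddings.size(1)
        self.proj = nn.Linear(feat_dim, emb_dim)
        self.register_buffer("class_emb", F.normalize(class_embeddings, dim=-1))
        self.background = nn.Parameter(torch.randn(emb_dim) * 0.02)

    def forward(self, region_feats):                     # (num_regions, feat_dim)
        x = F.normalize(self.proj(region_feats), dim=-1)
        bg = F.normalize(self.background, dim=0).unsqueeze(0)
        classes = torch.cat([bg, self.class_emb], dim=0) # row 0 = background
        return x @ classes.t()                           # cosine scores per class
```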