Sean Narenthiran

158 Followers

Published in

PyTorch Lightning Developer Blog

Nov 29, 2022

Lightning Transformers 0.2 — New 🤗Tasks, Community Features, and Big Model Training & Inference

Pairing 🤗 Transformers and Lightning has become increasingly popular: Lightning hides away the boilerplate of your training code, while you train with the extensive models and datasets that the Transformers library provides. Today we’re announcing Lightning Transformers 0.2, packed with new features, including the new Vision Transformers Image Classification Task…

Machine Learning

3 min read



Published in

PyTorch Lightning Developer Blog

Dec 9, 2021

Part I: Simplifying Transformer Research with xFormers & Lightning

Recently we’ve seen large growth in the number of variations of the Transformer model (Efficient Transformers: A Survey). However, leveraging these improved variations requires complicated custom implementations hidden across a variety of dense libraries. …

Transformers

5 min read



Published in

PyTorch Lightning Developer Blog

Sep 21, 2021

Leverage Sparsity for Faster Inference with Lightning Flash and SparseML

SparseML brings GPU inference speeds to the CPU. This means substantial cost savings, greater efficiency, and more options when it comes to deployability. …
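SparseML drives sparsification with training-time recipes; the underlying idea, magnitude pruning, can be sketched with PyTorch's built-in pruning utilities. This is a hedged illustration of the concept, not SparseML's own API:

```python
import torch
import torch.nn.utils.prune as prune

# Magnitude pruning: zero out the smallest-magnitude weights so that
# sparsity-aware runtimes can skip them at inference time.
layer = torch.nn.Linear(128, 128)
prune.l1_unstructured(layer, name="weight", amount=0.9)  # prune 90% of weights

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")
```

In practice the pruning schedule matters: SparseML's recipes prune gradually during fine-tuning so the remaining weights can compensate, rather than pruning once as above.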

Machine Learning

5 min read



Published in

PyTorch Lightning Developer Blog

Aug 26, 2021

Fine-tune Transformers Faster with Lightning Flash and Torch ORT

Torch ORT uses the ONNX Runtime to improve training and inference times for PyTorch models. — With Lightning Flash, all it takes is enable_ort=True to use Torch ORT when training Transformer-based models, giving you the power to use all the features Lightning provides, such as Callbacks, Logging, Mixed Precision, and Distributed Training with support for Advanced Distributed Plugins.

Machine Learning

4 min read



Published in

PyTorch Lightning Developer Blog

Jul 26, 2021

Fine-tuning Wav2Vec for Speech Recognition with Lightning Flash

As a result of our recent Lightning Flash Taskathon, we introduced a new fine-tuning task backed by HuggingFace Wav2Vec, powered by PyTorch Lightning. Wav2Vec 2.0 is a popular semi-supervised audio model that has shown impressive results when fine-tuned to downstream tasks, such as Speech Recognition. Wav2Vec has achieved State-of-the-Art Word…
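Wav2Vec 2.0 is typically fine-tuned for speech recognition with a CTC loss; at inference time, greedy CTC decoding collapses repeated per-frame predictions and drops the blank token. A minimal sketch of that decoding step (the token IDs are illustrative):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse consecutive repeats, then drop blanks (greedy CTC decoding)."""
    decoded, prev = [], None
    for token in frame_ids:
        if token != prev and token != blank:
            decoded.append(token)
        prev = token
    return decoded

# A blank between two identical tokens keeps them distinct
# (how CTC represents the double letter in "ll"):
print(ctc_greedy_decode([0, 3, 3, 0, 3, 5, 5, 0]))  # → [3, 3, 5]
```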

Pytorch Lightning

4 min read



Published in

PyTorch

Dec 10, 2020

Introducing PyTorch Lightning Sharded: Train SOTA Models, With Half The Memory

Lightning 1.1 reveals Sharded Training — train deep learning models on multiple GPUs saving over 50% on memory, with no performance loss or code change required! — In a recent collaboration with Facebook AI’s FairScale team and PyTorch Lightning, we’re bringing you 50% memory reduction across all your models. Our goal at PyTorch Lightning is to make recent advancements in the field accessible to all researchers, especially when it comes to performance optimizations. …
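Most of the saving comes from sharding optimizer state across GPUs instead of replicating it. A back-of-envelope sketch, assuming plain fp32 training with Adam (which keeps two moment buffers per parameter):

```python
def ddp_bytes_per_param():
    # Plain DDP: every GPU holds fp32 params, grads, and both Adam moments.
    return 4 + 4 + (4 + 4)

def sharded_bytes_per_param(world_size):
    # Optimizer-state sharding (ZeRO-style, as in FairScale): params and
    # grads stay replicated; Adam's 8 bytes of state are split across GPUs.
    return 4 + 4 + (4 + 4) / world_size

print(ddp_bytes_per_param())       # 16 bytes per parameter
print(sharded_bytes_per_param(8))  # 9.0 bytes per parameter, ~44% saved
```

Sharding gradients as well, or combining with mixed precision, pushes the saving toward the 50% figure quoted in the article; exact numbers depend on the optimizer and world size.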

Pytorch

6 min read



Published in

Towards Data Science

Oct 28, 2020

Train Conversational AI in 3 lines of code with NeMo and Lightning

Train state-of-the-art speech recognition, NLP and TTS models at scale with NeMo and Lightning — NeMo (Neural Modules) is a powerful framework from NVIDIA, built for easily training, building, and manipulating state-of-the-art conversational AI models. NeMo models can be trained multi-GPU and multi-node, with or without Mixed Precision, in just 3 lines of code. …

Pytorch

5 min read



Published in

PyTorch

Aug 10, 2020

Training DeepSpeech using TorchElastic

Reduce cost and horizontally scale deepspeech.pytorch using TorchElastic with Kubernetes. Deepspeech.pytorch provides training, evaluation and inference of end-to-end (E2E) speech-to-text models, in particular the highly popularised DeepSpeech2 architecture. It was developed to give users the flexibility and simplicity to scale, train and deploy their own speech recognition models…

Deep Speech

4 min read



Apr 20, 2020

CORD-19-ANN: Semantic Search Engine Using S-BERT

The Allen Institute, as part of its open research efforts, released a data dump of scholarly articles as an initiative to aid efforts in tackling COVID-19. This dataset contains 51,000 articles as of the time of writing and is growing in size. When searching the data, key word…
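At its core, the search reduces to nearest-neighbour lookup over sentence embeddings by cosine similarity. A minimal NumPy sketch of that lookup — the real system encodes text with S-BERT and uses an approximate-nearest-neighbour index for speed; the random vectors below are placeholders for embeddings:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    # Cosine similarity = dot product of L2-normalised vectors.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 384))             # stand-ins for S-BERT embeddings
query = docs[42] + 0.01 * rng.normal(size=384)  # near-duplicate of document 42
idx, scores = top_k(query, docs)
print(idx[0])  # → 42
```

Brute-force scoring like this is O(corpus size) per query; an ANN index trades a little recall for sub-linear lookup, which is what makes search over a growing corpus practical.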

Covid

6 min read



Jun 13, 2019

Scaling DeepSpeech using Mixed Precision and KubeFlow

Over the past few years at Digital Reasoning we have been developing audio analytics software that is highly effective at processing the noisy, domain-specific voice data that we typically encounter within the trading operations of major banks. Within the Audio Research Team, rapid research cycles drive continual refinements to our…
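In modern PyTorch, the mixed-precision pattern the article describes is built in via the AMP utilities. A hedged sketch of one training step (it falls back to full precision when no GPU is available):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

use_amp = torch.cuda.is_available()
# GradScaler rescales the loss so fp16 gradients don't underflow.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
with torch.autocast(device_type="cuda" if use_amp else "cpu", enabled=use_amp):
    # Forward pass runs in reduced precision under autocast.
    loss = F.cross_entropy(model(x), y)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

At the time of the article, mixed precision typically went through NVIDIA Apex; the idea is the same — keep master weights in fp32 while doing the bulk of the math in fp16.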

Machine Learning

7 min read


Sean Narenthiran

Research Engineer at Grid AI | Pytorch Lightning

