Things that I like
Send me your nerdy stuff @maedeh_dehghan
Today we’re going to talk about how computers understand speech and speak themselves. As computers play an increasing role in our daily lives, there has been a growing demand for voice user interfaces, but speech is also terribly complicated. Vocabularies are diverse, sentence structure can dictate the meaning of individual words, and computers also have to deal with accents, mispronunciations, and many common linguistic faux pas. The field of Natural Language Processing, or NLP, attempts to solve these problems with a number of techniques we’ll discuss today. And even though virtual assistants like Siri, Alexa, Google Home, Bixby, and Cortana have come a long way from the first speech processing and synthesis models, there is still much room for improvement.
Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios
Want to know more about Carrie Anne?
https://about.me/carrieannephilbin
The Latest from PBS Digital Studios:
https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV
Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
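The NLP techniques the video surveys range from hand-written grammar rules to statistical models. As a toy illustration (not one of the models the video itself describes), here is a minimal bigram model that predicts the next word from raw counts in a made-up corpus:

```python
from collections import Counter, defaultdict

# A classic statistical-NLP baseline: count which word follows which,
# then predict the most frequent follower. The corpus is invented
# purely for demonstration.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Most likely next word after `word`, by raw bigram count."""
    following = bigrams[word]
    return following.most_common(1)[0][0] if following else None

print(predict("the"))  # "cat" — it follows "the" twice, more than "mat" or "fish"
```

Real systems smooth these counts and condition on far more context, but the core idea — predicting language from observed frequencies — is the same.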
Technovation 2016 Winner Jennifer John introduces Dan Jurafsky, Professor of Linguistics and Computer Science at Stanford University. Dan explains how natural language processing is transforming the way we interact with the world and understand ourselves.
Compiler Explorer is an interactive online compiler which shows the assembly output of compiled C++, Rust, Go (and many more) code.
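Compiler Explorer focuses on compiled languages, but the same "what does my code turn into?" inspection is available for Python via the standard-library `dis` module, which prints CPython bytecode. A minimal sketch (the function here is just an arbitrary example):

```python
import dis

def square_plus_one(x):
    return x * x + 1

# Print the CPython bytecode for the function, roughly analogous to
# viewing assembly output in Compiler Explorer.
dis.dis(square_plus_one)
```

The exact opcodes vary between Python versions, just as Compiler Explorer's assembly output varies by compiler and flags.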
Copyright ©2015 Emmeline May and Blue Seat Studios.
Non-commercial use: Video must have copyright information displayed below video, with a live link to the original. No alteration to the video may be made, other than translation.
Commercial use: Contact [email protected] for licensing.
Script - Rockstar Dinosaur Pirate Princess ...
Animation - Rachel Brian ...
VO - Graham Wheeler
http://rockstardinosaurpirateprincess.com/2015/03/02/consent-not-actually-that-complicated/
http://www.blueseatstudios.com/
This blog post should help you understand the major differences between GCC and Clang. Both are excellent pieces of software, but there are differences worth discussing.
Let’s take a brief trip back to our school years and recall some lessons in mathematics and physics. Do you remember what the number π equals? And what is π squared? An odd question, but of course: it’s 9.87. And do you remember the value of the acceleration due to gravity, g? Naturally, that number was drilled into our memory so thoroughly that it’s impossible to forget: 9.81 m/s². It can vary from place to place, but for solving basic school problems, we typically used this value.
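The coincidence is easy to verify: π² ≈ 9.87 while g ≈ 9.81 m/s², a difference of well under one percent. A quick check:

```python
import math

g = 9.81                      # standard gravity used in school problems, m/s^2
pi_squared = math.pi ** 2     # 9.8696...

print(round(pi_squared, 2))   # 9.87
print(round(pi_squared - g, 3))  # the gap is only about 0.06
```

(The near-match is no accident of nature: the metre was once proposed as the length of a pendulum with a two-second period, which would have made g = π² m/s² exactly.)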
Neural networks have become increasingly impressive in recent years, but there's a big catch: we don't really know what they are doing. We give them data and ways to get feedback, and somehow, they learn all kinds of tasks. It would be really useful, especially for safety purposes, to understand what they have learned and how they work after they've been trained. The ultimate goal is not only to understand in broad strokes what they're doing but to precisely reverse engineer the algorithms encoded in their parameters. This is the ambitious goal of mechanistic interpretability. As an introduction to this field, we show how researchers have been able to partly reverse-engineer how InceptionV1, a convolutional neural network, recognizes images.
▀▀▀▀▀▀▀▀▀ SOURCES & READINGS ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources on mechanistic interpretability, and interpretability in general, that we've compiled for you:
On interpreting InceptionV1:
Feature visualization:
https://distill.pub/2017/feature-visualization/
Zoom In: An Introduction to Circuits:
https://distill.pub/2020/circuits/zoom-in/
The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does:
https://distill.pub/2020/circuits/
OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail:
https://microscope.openai.com/models
Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1:
https://microscope.openai.com/models/inceptionv1/mixed3b_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis
Activation atlases:
https://distill.pub/2019/activation-atlas/
More recent work applying SAEs to InceptionV1:
https://arxiv.org/abs/2406.03662v1
Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1, this time on transformers:
https://transformer-circuits.pub/
In the video, we cite "Toy Models of Superposition":
https://transformer-circuits.pub/2022/toy_model/index.html
We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning":
https://transformer-circuits.pub/2023/monosemantic-features/
More recent progress: Mapping the Mind of a Large Language Model:
Press: https://www.anthropic.com/research/mapping-mind-language-model
Paper in the Transformer Circuits thread: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Extracting Concepts from GPT-4:
Press: https://openai.com/index/extracting-concepts-from-gpt-4/
Paper: https://arxiv.org/abs/2406.04093
Browse features: https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html
Language models can explain neurons in language models (cited in the video):
Press: https://openai.com/index/language-models-can-explain-neurons-in-language-models/
Paper: https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
View neurons: https://openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html
Neel Nanda on how to get started with mechanistic interpretability:
Concrete Steps to Get Started in Transformer Mechanistic Interpretability: https://www.neelnanda.io/mechanistic-interpretability/getting-started
Mechanistic Interpretability Quickstart Guide: https://www.neelnanda.io/mechanistic-interpretability/quickstart
200 Concrete Open Problems in Mechanistic Interpretability: https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability
More work mentioned in the video:
Progress measures for grokking via mechanistic interpretability: https://arxiv.org/abs/2301.05217
Discovering Latent Knowledge in Language Models Without Supervision: https://arxiv.org/abs/2212.03827
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning: https://www.nature.com/articles/s41551-018-0195-0
▀▀▀▀▀▀▀▀▀ PATREON, MEMBERSHIP, MERCH ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon:
https://www.pa…

Transformer-based models like LLMs have demonstrated remarkable prowess in natural language processing tasks. However, their limitations…
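The feature-visualization technique covered in the interpretability sources above can be caricatured in a few lines: start from noise and do gradient ascent on the input so that a chosen "neuron" fires as strongly as possible. Real work does this on a trained CNN such as InceptionV1; in this sketch the "network" is a single linear neuron with a made-up weight vector, so the whole loop is self-contained:

```python
import math
import random

random.seed(0)
w = [random.gauss(0, 1) for _ in range(16)]         # hypothetical neuron weights
x = [random.gauss(0, 1) * 0.01 for _ in range(16)]  # start from faint random noise

# The activation is w·x, so its gradient with respect to the input is
# simply w: each ascent step nudges x toward the neuron's preferred direction.
for _ in range(100):
    x = [xi + 0.1 * wi for xi, wi in zip(x, w)]

# Measure alignment between the optimized input and the weight vector.
dot = sum(wi * xi for wi, xi in zip(w, x))
cos = dot / (math.sqrt(sum(wi * wi for wi in w)) * math.sqrt(sum(xi * xi for xi in x)))
print(round(cos, 3))  # near 1.0: the input now "looks like" what the neuron detects
```

For a real vision model the input is an image and the optimization needs regularizers to stay natural-looking, which is exactly what the Distill feature-visualization article explores.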