
Things that I like

Send me your nerdy stuff @maedeh_dehghan

Subscribers: 294 (+2 in the last 24 hours; no data for 7 days; +23 in the last 30 days)
Subscriber growth rate: data not loaded

Natural language processing in 11 minutes: https://www.youtube.com/watch?v=fOvTtapxa9c
Natural Language Processing: Crash Course Computer Science #36

Today we’re going to talk about how computers understand speech and speak for themselves. As computers play an increasing role in our daily lives, there has been a growing demand for voice user interfaces, but speech is also terribly complicated. Vocabularies are diverse, sentence structures can often dictate the meaning of certain words, and computers also have to deal with accents, mispronunciations, and many common linguistic faux pas. The field of Natural Language Processing, or NLP, attempts to solve these problems with a number of techniques we’ll discuss today. And even though our virtual assistants like Siri, Alexa, Google Home, Bixby, and Cortana have come a long way from the first speech processing and synthesis models, there is still much room for improvement.

Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios

Want to know more about Carrie Anne?

https://about.me/carrieannephilbin

The Latest from PBS Digital Studios:

https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV

Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
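A toy illustration in Python (mine, not from the video) of one of the simplest ideas in this family: a bigram model that predicts the next word purely from counts of which word followed which in a corpus.

# Toy bigram "language model": predict the next word by counting
# which word most often follows the current one in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Most likely word after "the", by raw frequency.
print(bigrams["the"].most_common(1))  # [('cat', 2)]

Real systems use far larger corpora and far richer models, but predicting words from context remains a core idea across NLP.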

Words and their meanings... https://www.youtube.com/watch?v=QIdB6M5WdkI
Dan Jurafsky on Natural Language Processing

Technovation 2016 Winner Jennifer John introduces Dan Jurafsky, Professor of Linguistics and Computer Science at Stanford University. Dan explains how natural language processing is transforming the way we interact with the world and understand ourselves.

A compiler playground: https://godbolt.org/
Compiler Explorer

Compiler Explorer is an interactive online compiler that shows the assembly output of compiled C++, Rust, Go (and many more) code.
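For a quick local taste of this kind of inspection (a rough analogue, since Compiler Explorer targets compiled languages, though it also has a Python mode that shows bytecode), Python's standard dis module prints the bytecode a function compiles to:

# Python's built-in dis module shows the CPython bytecode a function
# compiles to, much as Compiler Explorer shows assembly for C++ or Rust.
import dis

def square(x):
    return x * x

# Exact opcode names vary by Python version
# (e.g. BINARY_MULTIPLY in older versions vs BINARY_OP in newer ones).
dis.dis(square)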

Repost from Geek Alerts
Today, September 9, is Dennis Ritchie's birthday. Dennis MacAlistair Ritchie was an American computer scientist best known as the creator of the C programming language and for his many contributions to the creation and development of the Unix operating system alongside Ken Thompson. In 1983, Ritchie and Thompson received the Turing Award, the most prestigious prize in computer science, for their implementation of Unix. Ritchie also received the National Medal of Technology in 1999 from then-President Clinton. Ritchie's body was found on October 12, 2011, at the age of seventy, in the home where he lived alone; the exact time of his death was never determined. His death was announced a week after Steve Jobs's, but it drew far less media coverage by comparison. Today would have been Dennis's 83rd birthday. Without his contributions, probably none of us could use computers, complex software, or even the modern internet in their current form. https://en.wikipedia.org/wiki/Dennis_Ritchie hadi @geekalerts
Tea Consent (Clean)

Copyright ©2015 Emmeline May and Blue Seat Studios.
Non-commercial use: the video must display the copyright information below it, with a live link to the original. No alteration to the video may be made other than translation.
Commercial use: contact [email protected] for licensing.
Script - Rockstar Dinosaur Pirate Princess ... Animation - Rachel Brian ... VO - Graham Wheeler
http://rockstardinosaurpirateprincess.com/2015/03/02/consent-not-actually-that-complicated/
http://www.blueseatstudios.com/

GCC vs Clang: Battle of the Behemoths - Incredibuild

This blog post should help you understand the major differences between GCC and Clang. Both are excellent compilers, but there are real differences worth discussing.

Is it just a coincidence that Earth's gravitational acceleration is approximately equal to the square of π? It may look accidental at first glance, but no! They may really be connected. https://roitman.io/blog/91
A wonderful coincidence or an expected connection: why π² ≈ g.

Let’s take a brief trip back to our school years and recall some lessons in mathematics and physics. Do you remember what the number π equals? And what π squared is? A strange question, sure, but of course: it’s 9.87. And do you remember the value of the acceleration due to gravity, g? Naturally, that number was drilled into our memory so thoroughly that it’s impossible to forget: 9.81 m/s². It can vary, but for solving basic school problems, we typically used this value.
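The likely connection, for the record (the well-known seconds-pendulum story, summarized here rather than quoted from the article): one early proposal defined the metre as the length of a pendulum whose half-period is one second. The small-angle pendulum formula gives

T = 2\pi \sqrt{L/g}  \quad\Rightarrow\quad  g = \frac{4 \pi^2 L}{T^2},

so with L = 1 m and T = 2 s, exactly g = \pi^2 m/s^2 ≈ 9.87. The metre was ultimately defined from the Paris meridian instead, which happened to yield almost the same length, so the near-equality g ≈ π² survived.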

Repost from N/a
A funny and at the same time instructive video :) https://www.youtube.com/watch?v=jGCvY4gNnA8
What Do Neural Networks Really Learn? Exploring the Brain of an AI Model

Neural networks have become increasingly impressive in recent years, but there's a big catch: we don't really know what they are doing. We give them data and ways to get feedback, and somehow, they learn all kinds of tasks. It would be really useful, especially for safety purposes, to understand what they have learned and how they work after they've been trained. The ultimate goal is not only to understand in broad strokes what they're doing but to precisely reverse engineer the algorithms encoded in their parameters. This is the ambitious goal of mechanistic interpretability. As an introduction to this field, we show how researchers have been able to partly reverse-engineer how InceptionV1, a convolutional neural network, recognizes images (a toy sketch of the core technique, feature visualization, follows the source list below).

SOURCES & READINGS

This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources about mechanistic interpretability and interpretability in general we've compiled for you:

On Interpreting InceptionV1: Feature visualization:

https://distill.pub/2017/feature-visualization/

Zoom in: An Introduction to Circuits:

https://distill.pub/2020/circuits/zoom-in/

The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does:

https://distill.pub/2020/circuits/

OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail:

https://microscope.openai.com/models

Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1:

https://microscope.openai.com/models/inceptionv1/mixed3b_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis

Activation atlases:

https://distill.pub/2019/activation-atlas/

More recent work applying SAEs to InceptionV1:

https://arxiv.org/abs/2406.03662v1

Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1. This time on transformers:

https://transformer-circuits.pub/

In the video, we cite "Toy Models of Superposition":

https://transformer-circuits.pub/2022/toy_model/index.html

We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning":

https://transformer-circuits.pub/2023/monosemantic-features/

More recent progress: Mapping the Mind of a Large Language Model: Press:

https://www.anthropic.com/research/mapping-mind-language-model

Paper in the transformers circuits thread:

https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

Extracting Concepts from GPT-4: Press:

https://openai.com/index/extracting-concepts-from-gpt-4/

Paper:

https://arxiv.org/abs/2406.04093

Browse features:

https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html

Language models can explain neurons in language models (cited in the video): Press:

https://openai.com/index/language-models-can-explain-neurons-in-language-models/

Paper:

https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html

View neurons:

https://openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html

Neel Nanda on how to get started with Mechanistic Interpretability: Concrete Steps to Get Started in Transformer Mechanistic Interpretability:

https://www.neelnanda.io/mechanistic-interpretability/getting-started

Mechanistic Interpretability Quickstart Guide:

https://www.neelnanda.io/mechanistic-interpretability/quickstart

200 Concrete Open Problems in Mechanistic Interpretability:

https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability

More work mentioned in the video: Progress measures for grokking via mechanistic interpretability:

https://arxiv.org/abs/2301.05217

Discovering Latent Knowledge in Language Models Without Supervision:

https://arxiv.org/abs/2212.03827

Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning:

https://www.nature.com/articles/s41551-018-0195-0
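As promised above, here is a toy sketch of feature visualization by activation maximization (my own illustration, not code from the video or these sources). It uses torchvision's pretrained GoogLeNet as a stand-in for InceptionV1; the layer and channel picked are arbitrary assumptions.

# Feature visualization sketch: gradient-ascend an input image to
# maximize one channel's mean activation in GoogLeNet (InceptionV1).
import torch
from torchvision.models import googlenet, GoogLeNet_Weights

model = googlenet(weights=GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

acts = {}
model.inception4a.register_forward_hook(
    lambda mod, inp, out: acts.update(out=out)  # capture the layer's output
)

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -acts["out"][0, 12].mean()  # channel 12 is an arbitrary choice
    loss.backward()
    opt.step()

# `img` now roughly shows what excites that channel; published feature-
# visualization work adds regularizers and transformations for clarity.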


Repost from Geek Alerts
The French startup Mistral has introduced three new models named Codestral Mamba, Mathstral, and Mistral NeMo. Their announcement style is as simple and compact as ever: a one-line tweet with a link to their blog (previously they would just post a torrent link). All three models are open source under the Apache 2.0 license, which permits redistributing the models provided the company and project are credited. In order: Codestral Mamba is the company's first Mamba model; Mathstral is their first math-focused model, released in honor of Archimedes on his 2311th anniversary; and the last model introduced today is their new small model with 12 billion parameters, built in collaboration with Nvidia. All three models performed well on benchmarks, beating most current models. https://mistral.ai/news/mistral-nemo/ https://mistral.ai/news/mathstral/ https://mistral.ai/news/codestral-mamba/ hadi @geekalerts
Why does the transformer architecture perform poorly on computational tasks? https://medium.com/autonomous-agents/part-1-scientific-computing-why-transformers-fall-short-in-scientific-computing-812c64c5c149
Part-1 (Scientific Computing) ~ Why Transformers Fall Short in Scientific Computing

Transformer-based models like LLMs have demonstrated remarkable prowess in natural language processing tasks. However, their limitations…
