Frozen AI News archive

Gemini Nano: 50-90% of Gemini Pro, <100ms inference, on device, in Chrome Canary

The latest **Chrome Canary** now includes a feature flag for **Gemini Nano**, offering a prompt API and on-device optimization guide, with models Nano 1 and 2 at **1.8B** and **3.25B** parameters respectively, showing decent performance relative to Gemini Pro. The base and instruct-tuned model weights have been extracted and posted to **HuggingFace**. In AI model releases, **Anthropic** launched **Claude 3.5 Sonnet**, which outperforms **GPT-4o** on some benchmarks, is twice as fast as Opus, and is free to try. **DeepSeek-Coder-V2** achieves **90.2%** on HumanEval and **75.7%** on MATH, surpassing GPT-4-Turbo-0409, with models up to **236B** parameters and **128K** context length. **GLM-0520** from **Zhipu AI/Tsinghua** ranks highly in coding and overall benchmarks. **NVIDIA** announced **Nemotron-4 340B**, an open model family for synthetic data generation. Research highlights include **TextGrad**, a framework for automatic differentiation on textual feedback; **PlanRAG**, an iterative plan-then-RAG decision-making technique; a paper on **goldfish loss** to mitigate memorization in LLMs; and a tree search algorithm for language model agents.


AI News for 6/21/2024-6/24/2024. We checked 7 subreddits, 384 Twitters and 30 Discords (415 channels, and 5896 messages) for you. Estimated reading time saved (at 200wpm): 660 minutes. You can now tag @smol_ai for AINews discussions!

The latest Chrome Canary now ships Gemini Nano behind a feature flag.

You'll now have access to the model via the console: window.ai.createTextSession()


Nano 1 and Nano 2 (1.8B and 3.25B parameters, 4-bit quantized) show decent performance relative to Gemini Pro.

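A rough back-of-the-envelope check (not from the original post) explains why those sizes are plausible on-device: at 4-bit quantization each weight costs about half a byte, before counting activations, the KV cache, or any tensors kept at higher precision.

```python
# Rough weight-storage estimate for the two Nano variants at 4-bit quantization.
# Ignores activations, KV cache, and any tensors kept at higher precision,
# so real on-device memory use will be somewhat higher.
BYTES_PER_PARAM_4BIT = 0.5

for name, params in [("Nano 1", 1.8e9), ("Nano 2", 3.25e9)]:
    gib = params * BYTES_PER_PARAM_4BIT / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")

# Nano 1: ~0.8 GiB of weights
# Nano 2: ~1.5 GiB of weights
```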

A live demo shows just how fast it runs.

Lastly, the base model and instruct-tuned model weights have already been extracted and posted to HuggingFace.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3 Opus, best of 4 runs. We are working on clustering and flow engineering with Haiku.

AI Model Releases and Benchmarks

AI Research Papers

AI Applications and Demos

Memes and Humor


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, /r/LLMDevs, /r/Singularity. Comment crawling works now but has lots to improve!

Stable Diffusion / AI Image Generation

ChatGPT / AI Assistants


AI Discord Recap

SPECIAL NOTE: As we do for fresh frontier model launches, we are displaying the differences in output from running the same prompts across Claude 3.5 Sonnet, Claude 3 Sonnet and Opus, GPT-4T, and GPT-4o.

Claude 3 Sonnet

1. LLM Performance Benchmarking and Advancements

2. Techniques for Efficient LLM Training and Inference

3. Open-Source AI Frameworks and Community Efforts

4. Multimodal AI and Generative Modeling

Claude 3.5 Sonnet

  1. LLM Performance and Benchmarking Debates:
  2. Optimizing LLM Training and Inference:
  3. Open-Source AI Development and Collaboration:
  4. Multimodal AI and Creative Applications:
  5. AI Integration and Practical Applications:

Claude 3 Opus

  1. LLM Performance and Benchmarking:
  2. Optimizing LLM Training and Inference:
  3. Open-Source AI Frameworks and Collaborations:
  4. Multimodal AI and Generative Models:
  5. AI Ethics, Legality, and Accountability:

GPT4T (gpt-4-turbo-2024-04-09)

**1. AI Hardware Evolves but Costs Spiral**:

**2. Optimization Takes Center Stage in AI Tools**:

**3. AI Breaks New Ground in Multifaceted Applications**:

**4. AI-centric Education and Jobs Expanding**:

**5. Miscellaneous AI Developments Reflect Wide Impact**:

GPT4O (gpt-4o-2024-05-13)

Model Optimization and LLM Innovations

Open-Source Developments and Community Efforts

AI in Production and Real-World Applications

Operational Challenges and Support Queries

Upcoming Technologies and Future Directions


PART 1: High level Discord summaries

HuggingFace Discord

Juggernaut or SD3 Turbo for Virtual Realities?: While Juggernaut Lightning is favored for its realism in non-coding creative scenarios, SD3 Turbo wasn't discussed as favorably, suggesting that model choice depends on the specific context and goals.

Quantum Leap for PyTorch Users: Investing time in modern libraries like PyTorch and HuggingFace is recommended over dated ones like sklearn, and bitsandbytes with precision tricks such as 4-bit quantization can help load large models on constrained hardware.
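
As a concrete sketch of that advice (not code from the discussion), here is how a causal LM can be loaded in 4-bit with transformers plus bitsandbytes; the model id and the NF4/bfloat16 settings are illustrative defaults, not recommendations from the channel:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Example checkpoint; substitute whatever model you actually need.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# 4-bit NF4 quantization keeps a 7B model's weights around 4 GB,
# which is what makes loading on constrained GPUs feasible.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Explain 4-bit quantization in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```

device_map="auto" lets accelerate spread layers across whatever GPU and CPU memory is available, which is usually the other half of getting a large model to fit.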

Meta-Model Mergers and Empathic Evolutions: The Open Empathic project is expanding with contributed movie scene categories via YouTube, while merging tactics for UltraChat and Mistral-Yarn elicited debate, with references to mergekit and frankenMoE finetuning as noteworthy techniques for improving AI models.

Souped-Up Software and Services: A suite of contributions surfaced, including Mistroll 7B v2.2's release, simple finetuning utilities for Stable Diffusion, a media-to-text conversion GUI using PyQt and Whisper, and the new AI platform Featherless.ai for serverless model usage.
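
For context on the media-to-text piece, the transcription core that such a GUI presumably wraps comes down to a few lines of openai-whisper; a generic sketch with a placeholder file path, not the shared project's actual code:

```python
import whisper  # pip install openai-whisper; ffmpeg must be on PATH for decoding

# "interview.mp3" is a placeholder path; any audio or video file ffmpeg can read works.
model = whisper.load_model("base")   # "tiny"/"base" run comfortably on CPU
result = model.transcribe("interview.mp3")
print(result["text"])
```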

In Pursuit of AI Reasoning Revelations: Plans to unravel recent works on reasoning with LLMs are brewing, with Understanding the Current State of Reasoning with LLMs (arXiv link) and repositories like Awesome-LLM-Reasoning and its namesake alternative repository link earmarked for examination.


Unsloth AI (Daniel Han) Discord


Stability.ai (Stable Diffusion) Discord


CUDA MODE Discord


LM Studio Discord

VRAM Crunch and Hefty Price Tags: Engineers highlighted the VRAM bottleneck when handling colossal models like Command R (34b) Q4_K_S, suggesting EXL2 as a more VRAM-efficient format. For heavy-duty AI work, the NVIDIA DGX GH200, touted for its mammoth memory, remains financially out of reach for most, with the investment running into many thousands of dollars.

Quantum Leaps in LLM Reasoning: Users were impressed with the Hermes 2 Theta Llama-3 70B model, known for its significant token context limit and creative strengths. Conversations around LLMs' lack of temporal awareness spurred mention of the Hathor Fractionate-L3-8B for its performance when output tensors and embeddings remain unquantized.

Cool Rigs and Hot Chips: On the hardware battlefield, running Codestral on P40 GPUs pushed power utilization up while delivering about 12 tokens/second. Meanwhile, the iPad Pro’s 16GB RAM was debated for its ability to handle AI models, and the dream of using DirectX or Vulkan for multi-GPU support in AI was floated in response to the absence of NVLink on 4000-series GPUs.

Patchwork and Plugins: The LLaMa library vexed users with errors stemming from a model's expected tensor count mismatch, whereas deepseekV2 faced loading woes, potentially fixable by updating to V0.2.25. Enthusiasm bubbled for a hypothetical all-in-one model runner that could handle a gamut of Huggingface models including text-to-speech and text-to-image.

Model Engineering and Enigmas: The quaintly named Llama 3 CursedStock V1.8-8B model piqued curiosity for its unique performance, especially in creative content generation. There was chatter about a Multi-model sequence map allowing data flow among several models, and the latest quantized Qwen2 500M model made waves for its ability to operate on less capable rigs, even a Raspberry Pi.
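
To give a sense of how small that footprint is, here is a minimal llama-cpp-python sketch of running such a quantized model; the GGUF filename is a placeholder and this is a generic example, not an LM Studio configuration:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename: a ~Q4 GGUF export of a 0.5B model weighs a few hundred MB,
# which is why it can run CPU-only on a Raspberry Pi class machine.
llm = Llama(model_path="qwen2-0_5b-instruct-q4_k_m.gguf", n_ctx=2048, n_threads=4)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about owls."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

n_ctx and n_threads are the main knobs to turn down further on very small boards.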


OpenAI Discord


Perplexity AI Discord


Nous Research AI Discord

Boost in Dataset Deduplication: Rensa outperforms datasketch with a 2.5-3x speed boost, leveraging Rust's FxHash, LSH index, and on-the-fly permutations for dataset deduplication.
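
For readers new to MinHash deduplication, the sketch below shows the standard MinHash-plus-LSH filtering loop using datasketch, the library Rensa is benchmarked against; Rensa's Python bindings follow the same overall shape, but check its README for the exact class names:

```python
from datasketch import MinHash, MinHashLSH  # pip install datasketch

def signature(text: str, num_perm: int = 128) -> MinHash:
    """Hash a document's token set into a MinHash signature."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox jumped over the lazy dog",   # near-duplicate of "a"
    "c": "a completely unrelated sentence about dataset deduplication",
}

lsh = MinHashLSH(threshold=0.7, num_perm=128)
kept = []
for doc_id, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):       # an already-kept doc is this similar -> treat as duplicate
        continue
    lsh.insert(doc_id, sig)
    kept.append(doc_id)

print(kept)  # expect something like ['a', 'c']
```

The reason this scales to millions of documents is that LSH turns pairwise comparison into near-constant-time bucket lookups, which is also where Rensa's Rust-side speedups land.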

Model Jailbreak Exposed: A Financial Times article highlights hackers "jailbreaking" AI models to reveal flaws, while contributors on GitHub share a "smol q* implementation" and innovative projects like llama.ttf, an LLM inference engine disguised as a font file.

Lively Debate on Model Parameters: In the ask-about-llms channel, discussions ranged from the surprisingly capable story generation of TinyStories-656K to assertions that general-purpose performance soars with 70B+ parameter models.

Dataset Synthesis and Classification Enhanced: Members share a Google Sheet for collaborative dataset tracking, explore improvements using the Hermes RAG format, and delve into datasets like SciRIFF and ft-instruction-synthesizer-collection for scientific and instructional purposes.

AI Safety Models Scrutiny and Coursework: #general sees a mix, from Gemini and OpenAI's redaction-capable safety models to the launch of Karpathy's LLM101n course, which encourages engineers to build a storytelling LLM.


Eleuther Discord


Latent Space Discord


Modular (Mojo 🔥) Discord


LAION Discord


Cohere Discord


LangChain AI Discord


OpenRouter (Alex Atallah) Discord


OpenInterpreter Discord


LLM Finetuning (Hamel + Dan) Discord

Instruction Synthesizing for the Win: A newly shared Hugging Face repository highlights the potential of Instruction Pre-Training, providing 200M synthesized instruction-response pairs across 40+ tasks, a robust resource for AI practitioners pushing supervised multitask pre-training further.
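
For anyone who wants to poke at such synthesized pairs before committing to a training run, streaming a few examples with the datasets library is the usual first step; the repository id below is a placeholder, not the repo shared in the channel:

```python
from itertools import islice
from datasets import load_dataset

# "someorg/synthesized-instruction-pairs" is a placeholder id, not the actual
# repository referenced above; streaming avoids downloading ~200M pairs just to peek.
ds = load_dataset("someorg/synthesized-instruction-pairs", split="train", streaming=True)

for example in islice(ds, 3):
    print(example)
```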

Bringing DeBERTa and Flash Together?: Curiosity is brewing over the possibility of combining DeBERTa with Flash Attention 2, raising the question, for AI engineers interested in novel architecture synergies, of how an implementation might leverage both.

Fixes and Workarounds: From a Maven course platform blank page issue solved using mobile devices to the resolution of permission errors after a kernel restart within braintrust, practical troubleshooting remains a staple of community discourse.

Credits Saga Continues: Persistent reports of missing service credits on platforms like Huggingface and Predibase sparked member-to-member support and referrals to the respective billing support teams. This included a tip that Predibase credits expire after 30 days, suggesting that engineers keep a keen eye on expiry dates to maximize credit use.

Training Errors and Overfitting Queries: Errors in running Axolotl's training command (Modal FTJ) and concerns about LoRA overfitting ('significantly lower training loss compared to validation loss') were significant pain points, underscoring the need for vigilant training monitoring among AI engineers.
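
On the overfitting point, one generic mitigation, outside of whatever knobs Axolotl's own YAML exposes, is to evaluate on a held-out split during training and stop once eval loss stops improving. The sketch below uses the Hugging Face Trainer purely as an illustration; model, train_ds, and eval_ds are placeholders for an already-prepared LoRA model and tokenized datasets, so this is not the course's or Axolotl's actual configuration:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Placeholder objects: assume a PEFT/LoRA-wrapped `model` and tokenized
# `train_ds` / `eval_ds` already exist; only the monitoring knobs matter here.
args = TrainingArguments(
    output_dir="lora-run",
    evaluation_strategy="steps",       # compute eval_loss periodically, not only at the end
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,       # roll back to the checkpoint with the best eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()  # a widening train/eval loss gap now triggers an early stop instead of silent overfitting
```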


LlamaIndex Discord


Interconnects (Nathan Lambert) Discord


OpenAccess AI Collective (axolotl) Discord


Mozilla AI Discord


Torchtune Discord


tinygrad (George Hotz) Discord


LLM Perf Enthusiasts AI Discord


MLOps @Chipro Discord


The AI Stack Devs (Yoko Li) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Datasette - LLM (@SimonW) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The YAIG (a16z Infra) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

HuggingFace ▷ #general (715 messages🔥🔥🔥):


HuggingFace ▷ #today-im-learning (3 messages):

Link mentioned: Ashvanth.S Blog - Wrapping your head around Self-Attention, Multi-head Attention: no description found


HuggingFace ▷ #cool-finds (5 messages):

Links mentioned:


HuggingFace ▷ #i-made-this (12 messages🔥):

Links mentioned:


HuggingFace ▷ #reading-group (5 messages):

Links mentioned:


HuggingFace ▷ #computer-vision (9 messages🔥):

Link mentioned: Tweet from Science girl (@gunsnrosesgirl3): The evolution of fashion using AI


HuggingFace ▷ #NLP (1 messages):

capetownbali: Let us all know how your fine tuning on LLama goes!


HuggingFace ▷ #diffusion-discussions (2 messages):


Unsloth AI (Daniel Han) ▷ #general (376 messages🔥🔥):


Unsloth AI (Daniel Han) ▷ #random (108 messages🔥🔥):


Unsloth AI (Daniel Han) ▷ #help (228 messages🔥🔥):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #showcase (1 messages):

Link mentioned: Apple and Meta Partnership: The Future of Generative AI in iPhones: Recent discussions between Apple and AI companies like Meta regarding partnerships to integrate generative AI models into Apple's AI system for iPhones have generated significant interest. This articl...


Stability.ai (Stable Diffusion) ▷ #general-chat (583 messages🔥🔥🔥):

Links mentioned:


CUDA MODE ▷ #general (17 messages🔥):

Links mentioned:


CUDA MODE ▷ #torch (4 messages):


CUDA MODE ▷ #cool-links (1 messages):

Link mentioned: Guide-NVIDIA-Tools/Chapter09 at main · CisMine/Guide-NVIDIA-Tools: Contribute to CisMine/Guide-NVIDIA-Tools development by creating an account on GitHub.


CUDA MODE ▷ #jobs (1 messages):

Link mentioned: GitHub - CisMine/Parallel-Computing-Cuda-C: Contribute to CisMine/Parallel-Computing-Cuda-C development by creating an account on GitHub.


CUDA MODE ▷ #beginner (3 messages):


CUDA MODE ▷ #torchao (28 messages🔥):

Links mentioned:


CUDA MODE ▷ #off-topic (18 messages🔥):


CUDA MODE ▷ #hqq (2 messages):


CUDA MODE ▷ #llmdotc (465 messages🔥🔥🔥):

Links mentioned:


CUDA MODE ▷ #rocm (2 messages):


CUDA MODE ▷ #bitnet (25 messages🔥):

Link mentioned: The next tutorials · Issue #426 · pytorch/ao: From our README.md torchao is a library to create and integrate high-performance custom data types layouts into your PyTorch workflows And so far we've done a good job building out the primitive d...


LM Studio ▷ #💬-general (312 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (116 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🧠-feedback (4 messages):


LM Studio ▷ #⚙-configs-discussion (9 messages🔥):

Link mentioned: NVIDIA DGX GH200: Massive memory supercomputing for emerging AI


LM Studio ▷ #🎛-hardware-discussion (18 messages🔥):


LM Studio ▷ #🧪-beta-releases-chat (3 messages):


LM Studio ▷ #avx-beta (1 messages):

cdrivex4: Yes ok.. Sounds like fun


LM Studio ▷ #model-announcements (1 messages):


LM Studio ▷ #🛠-dev-chat (12 messages🔥):


OpenAI ▷ #ai-discussions (276 messages🔥🔥):

Links mentioned:


OpenAI ▷ #gpt-4-discussions (29 messages🔥):


OpenAI ▷ #prompt-engineering (53 messages🔥):


OpenAI ▷ #api-discussions (53 messages🔥):


Perplexity AI ▷ #general (381 messages🔥🔥):


Perplexity AI ▷ #sharing (12 messages🔥):

Link mentioned: YouTube: no description found


Perplexity AI ▷ #pplx-api (12 messages🔥):

Link mentioned: Chat Completions: no description found


Nous Research AI ▷ #off-topic (20 messages🔥):

Links mentioned:


Nous Research AI ▷ #interesting-links (9 messages🔥):

Links mentioned:


Nous Research AI ▷ #general (278 messages🔥🔥):

Links mentioned:


Nous Research AI ▷ #ask-about-llms (15 messages🔥):

Link mentioned: raincandy-u/TinyStories-656K · Hugging Face: no description found


Nous Research AI ▷ #rag-dataset (12 messages🔥):

Links mentioned:


Nous Research AI ▷ #world-sim (1 messages):

teknium: https://twitter.com/hamish_kerr/status/1804352352511836403


Eleuther ▷ #general (114 messages🔥🔥):

Links mentioned:


Eleuther ▷ #research (155 messages🔥🔥):

Links mentioned:


Eleuther ▷ #scaling-laws (10 messages🔥):

Links mentioned:


Eleuther ▷ #interpretability-general (3 messages):

Links mentioned:


Eleuther ▷ #lm-thunderdome (6 messages):

Link mentioned: add tokenizer logs info (#1731) · EleutherAI/lm-evaluation-harness@536691d: * add tokenizer logs info

Co-authored-by: Hailey Schoelkopf <[email protected]>


Eleuther ▷ #multimodal-general (2 messages):


Eleuther ▷ #gpt-neox-dev (3 messages):


Latent Space ▷ #ai-general-chat (133 messages🔥🔥):

Links mentioned:


Latent Space ▷ #ai-announcements (3 messages):

Link mentioned: Tweet from Latent Space Podcast (@latentspacepod): 🆕How to Hire AI Engineers a rare guest post (and bonus pod) from @james_elicit and @adamwiggins! Covering: - Defining the Hiring Process - Defensive AI Engineering as a chaotic medium - Tech Choi...


Latent Space ▷ #ai-in-action-club (72 messages🔥🔥):


Modular (Mojo 🔥) ▷ #general (62 messages🔥🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #📺︱youtube (1 messages):

Link mentioned: - YouTube: no description found


Modular (Mojo 🔥) ▷ #ai (5 messages):

Link mentioned: Haystack | Haystack: Haystack, the composable open-source AI framework


Modular (Mojo 🔥) ▷ #🔥mojo (51 messages🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #performance-and-benchmarks (58 messages🔥🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #nightly (21 messages🔥):

Links mentioned:


LAION ▷ #general (102 messages🔥🔥):

Links mentioned:


LAION ▷ #research (27 messages🔥):

Links mentioned:


Cohere ▷ #general (117 messages🔥🔥):

Links mentioned:


Cohere ▷ #project-sharing (10 messages🔥):

Link mentioned: Cohere Client by Hk669 · Pull Request #3004 · microsoft/autogen: Why are these changes needed? To enhance the support of non-OpenAI models with AutoGen. The Command family of models includes Command, Command R, and Command R+. Together, they are the text-generat...


Cohere ▷ #announcements (1 messages):

Links mentioned:


LangChain AI ▷ #general (100 messages🔥🔥):

Links mentioned:


LangChain AI ▷ #langchain-templates (21 messages🔥):

Links mentioned:


LangChain AI ▷ #share-your-work (5 messages):

Links mentioned:


LangChain AI ▷ #tutorials (1 messages):

Link mentioned: Do you even need an AI Framework or GPT-4o for your app?: So, you want to integrate AI into your product, right? Whoa there, not so fast!With models like GPT-4o, Gemini, Claude, Mistral, and others and frameworks li...


OpenRouter (Alex Atallah) ▷ #announcements (1 messages):

Links mentioned:


OpenRouter (Alex Atallah) ▷ #app-showcase (7 messages):

Links mentioned:


OpenRouter (Alex Atallah) ▷ #general (106 messages🔥🔥):

Links mentioned:


OpenInterpreter ▷ #general (85 messages🔥🔥):

Links mentioned:


OpenInterpreter ▷ #O1 (17 messages🔥):

Links mentioned:


OpenInterpreter ▷ #ai-content (5 messages):

Link mentioned: AI Remix: The Wheels on the Bus | Next-Gen Music & Visuals by Suno & LumaLabs: Experience 'The Wheels on the Bus' like never before with this innovative AI-generated remix! Using the latest in GenAI technology, we've collaborated with S...


LLM Finetuning (Hamel + Dan) ▷ #general (33 messages🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #learning-resources (1 messages):

christopher_39608: Interesting post:

https://x.com/rasbt/status/1805217026161401984


LLM Finetuning (Hamel + Dan) ▷ #hugging-face (6 messages):


LLM Finetuning (Hamel + Dan) ▷ #replicate (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #langsmith (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #jason_improving_rag (1 messages):

jxnlco: nah


LLM Finetuning (Hamel + Dan) ▷ #axolotl (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #wing-axolotl (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #simon_cli_llms (1 messages):

mgrcic: Also available at https://www.youtube.com/watch?v=QUXQNi6jQ30


LLM Finetuning (Hamel + Dan) ▷ #credits-questions (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #fireworks (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #braintrust (25 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #predibase (13 messages🔥):


LlamaIndex ▷ #blog (5 messages):


LlamaIndex ▷ #general (70 messages🔥🔥):

Links mentioned:


LlamaIndex ▷ #ai-discussion (1 messages):

Link mentioned: Unlocking Efficiency in Machine Learning: A Guide to MLflow and LLMs with LlamaIndex Integration: Ankush k Singal


Interconnects (Nathan Lambert) ▷ #news (17 messages🔥):

Link mentioned: Multi Blog – Multi is joining OpenAI : Recently, we’ve been increasingly asking ourselves how we should work with computers. Not on or using computers, but truly with computers. With AI. We think it’s one of the most importan...


Interconnects (Nathan Lambert) ▷ #ml-questions (20 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-drama (13 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #random (9 messages🔥):


Interconnects (Nathan Lambert) ▷ #memes (3 messages):


Interconnects (Nathan Lambert) ▷ #reads (4 messages):

Link mentioned: Tweet from Kyle Corbitt (@corbtt): Thrilled to be officially recognized as the strongest model on the AlpacaEval leaderboard. 🙂 https://tatsu-lab.github.io/alpaca_eval/ Quoting Kyle Corbitt (@corbtt) Super excited to announce our ...


OpenAccess AI Collective (axolotl) ▷ #general (33 messages🔥):

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (1 messages):

lore0012: I am no longer hitting the issue.


OpenAccess AI Collective (axolotl) ▷ #general-help (4 messages):

Link mentioned: [BUG] the argument of parser.add_argument is wrong in tools/checkpoint/convert.py · Issue #866 · NVIDIA/Megatron-LM: Describe the bug https://github.com/NVIDIA/Megatron-LM/blob/main/tools/checkpoint/convert.py#L115 It must be 'choices=['GPT', 'BERT'],' not 'choice=['GPT', 'BER...


OpenAccess AI Collective (axolotl) ▷ #datasets (5 messages):

Link mentioned: GitHub - beowolx/rensa: High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets: High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets - beowolx/rensa


OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (5 messages):

Link mentioned: OpenAccess-AI-Collective/axolotl | Phorm AI Code Search): Understand code, faster.


Mozilla AI ▷ #announcements (1 messages):


Mozilla AI ▷ #llamafile (31 messages🔥):

Links mentioned:


Torchtune ▷ #general (24 messages🔥):

Links mentioned:


tinygrad (George Hotz) ▷ #general (8 messages🔥):


tinygrad (George Hotz) ▷ #learn-tinygrad (2 messages):

Link mentioned: make buffer view optional with a flag · tinygrad/tinygrad@bdda002: You like pytorch? You like micrograd? You love tinygrad! ❤️ - make buffer view optional with a flag · tinygrad/tinygrad@bdda002


LLM Perf Enthusiasts AI ▷ #claude (1 messages):

Link mentioned: Tweet from Rob Haisfield (robhaisfield.com) (@RobertHaisfield): I was "testing" Sonnet 3.5 @websim_ai + new features (mainly "generate in new tab"). I'm FLOORED by this model's speed, creativity, intelligence 🫨😂 Highlights from the lab t...


MLOps @Chipro ▷ #events (1 messages):

Link mentioned: Inauguration of AWS Cloud Clubs MJCET, Fri, Jun 28, 2024, 10:00 AM | Meetup: Join Us for the Grand Inauguration of AWS Cloud Club MJCET! We are delighted to announce the launching event of our AWS Cloud Club at MJCET! Come and explore the world






{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}