Dot AI Conference

Paris, France

TLDA (Too Long Didn't Attend)

The conference was OK. What I found a bit confusing was that the target audience was unclear. Most of the talks can be summed up as "Agents, agents everywhere". Almost no mention of local models (Qwen was used in one of the demos).

The motto of the conference was Agency.


Agency > Intelligence

Karpathy on X:

I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence, various entertainment/media, obsession with IQ etc. Agency is significantly more powerful and significantly more scarce. Are you hiring for agency? Are we educating for agency? Are you acting as if you had 10X agency?

Grok's explanation:

Agency, as a personality trait, refers to an individual's capacity to take initiative, make decisions, and exert control over their actions and environment. It's about being proactive rather than reactive—someone with high agency doesn't just let life happen to them; they shape it. Think of it as a blend of self-efficacy, determination, and a sense of ownership over one's path.

People with strong agency tend to set goals and pursue them with confidence, even in the face of obstacles. They're the type to say, "I'll figure it out," and then actually do it. On the flip side, someone low in agency might feel more like a passenger in their own life, waiting for external forces—like luck, other people, or circumstances—to dictate what happens next.


Talks

Katia Gil Guzman - OpenAI

Pitched Codex. No secret to using it: give it context, prefer smaller tasks, and let it work while you "chill".

Gael Varoquaux - @inria

Author of scikit-learn. Talked about data analysis and the difficulties of building datasets (and keeping them unbiased), overfitting, etc. Demoed skrub-data.

Stanislas Polu (most interesting talk)

Talked about recent advances, like LLMs winning gold in a math competition. The interesting part: they didn't train a new model, but used Gemini/GPT-5 in a proposer-verifier setup.

On one side: agents that propose solutions. On the other: an agent that verifies and gives feedback.
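The loop can be sketched like this. This is a toy stand-in, not the actual system from the talk: `propose` and `verify` here are plain Python functions on a made-up divisor-finding task, where in the real setup each would be a frontier-model call (Gemini/GPT-5), and the verifier's feedback would be natural-language critique.

```python
import random

def propose(task, feedback):
    """Proposer: draft a candidate solution, taking past feedback into account."""
    # Stand-in for an LLM call: guess a divisor, skipping rejected ones.
    rejected = {f["candidate"] for f in feedback}
    candidates = [n for n in range(2, task["n"]) if n not in rejected]
    return random.choice(candidates)

def verify(task, candidate):
    """Verifier: check the candidate and return structured feedback."""
    ok = task["n"] % candidate == 0
    return {"candidate": candidate, "ok": ok,
            "note": "accepted" if ok else f"{candidate} does not divide {task['n']}"}

def solve(task, max_rounds=100):
    feedback = []
    for _ in range(max_rounds):
        candidate = propose(task, feedback)
        result = verify(task, candidate)
        if result["ok"]:
            return candidate
        feedback.append(result)  # the verifier's critique guides the next proposal
    return None

print(solve({"n": 91}))  # prints 7 or 13
```

The point of the pattern: neither side needs to be smarter than today's models; the verifier just has to be reliable at rejecting wrong answers.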

Based on this idea he started srchd - looks very interesting; going to burn some credits on this.

Vaibhav Gupta (second most interesting)

Talked about BAML. I didn't know about BAML, but it seems to fix a problem I have first-hand: for "Snap to Learn" I'm calling Gemini to process uploaded images and return structured JSON, and some uploaded images have resulted in broken JSON. BAML promises to fix this (need to try).
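A minimal sketch of the failure mode (not of BAML itself - `raw_reply` below is a made-up example): the model wraps its JSON in prose, so naive `json.loads` on the raw reply breaks, and you end up writing best-effort extraction plus a schema check by hand. BAML's pitch, as I understood it, is to replace this kind of guard with parsing against a declared type.

```python
import json
import re

# Made-up example of a model reply that wraps the JSON in chatty prose.
raw_reply = (
    "Sure! Here is the extracted data: "
    '{"title": "Photosynthesis", "questions": ["What is chlorophyll?"]} '
    "Let me know if you need anything else."
)

def extract_json(text):
    """Best-effort: pull the first {...} span out of the reply, then validate."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    data = json.loads(match.group(0))
    # Minimal schema check; a declared-type parser would also repair
    # near-miss output instead of just failing here.
    if not {"title", "questions"} <= data.keys():
        raise ValueError("missing required fields")
    return data

print(extract_json(raw_reply)["title"])  # prints Photosynthesis
```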

Claire Gouze

Speedtalk. Data and getnao.io.

Tejas Chopra - Netflix

Interesting talk - claimed that compute is no longer the bottleneck; data and data access are. Models no longer fit on one GPU, and GPUs process data far faster than it can be delivered, so they sit idle waiting for data.
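A back-of-the-envelope version of that claim, with illustrative numbers of my own (not figures from the talk): if the GPU can consume data far faster than the pipeline delivers it, the idle fraction is dominated by load time.

```python
# Illustrative numbers, not from the talk.
gpu_throughput_gbps = 200.0   # GB/s the GPU could consume if fed continuously
delivery_gbps = 4.0           # GB/s the data pipeline actually delivers

batch_gb = 10.0
compute_time_s = batch_gb / gpu_throughput_gbps   # 0.05 s to chew through a batch
load_time_s = batch_gb / delivery_gbps            # 2.5 s to fetch it

idle_fraction = 1 - compute_time_s / load_time_s
print(f"GPU idle {idle_fraction:.0%} of the time")  # prints: GPU idle 98% of the time
```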

Alex Laterre - Instadeep

No idea what he talked about.

Bertrand Charpentier

Good talk about the energy use of LLMs and optimizations to reduce compute/energy. The trick: somehow convert model operations to hardware instructions, making everything faster and more efficient.

Fabien Potencier

Author of the Symfony PHP framework, now founder of the Upsun PaaS. Cool guy (I may be biased, having worked with PHP and Symfony). A real hardcore dev with 40 years of experience. Talked about using LLMs for the boring stuff.

Nnenna N'Dukwe

No idea what she talked about :(

Viktoria Seemann - Databricks

Pitching Databricks... a RAG system...

Remi Louf - dottxt.ai

Funny speedtalk and demo. LLMs produce unstructured output, yet as devs we need structured output. Their tool constrains output to a particular format.

"Structure is learned behavior, not guaranteed"

For the moment, LLMs try to follow structure but fail from time to time. We're somehow OK with failure because it's an LLM, but we wouldn't be OK if it were software. With their libraries, LLMs become more reliable and composable (a Unix moment for LLMs).
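The core idea behind constrained output, as I understood it (my toy sketch, not their actual API): at each decoding step, mask out every token the target format forbids, so the output is valid by construction rather than by the model's good behavior. Here with a made-up five-state "grammar" for `{"answer": "yes"|"no"}` and random sampling standing in for the model:

```python
import random

# Toy vocabulary and a finite-state "grammar": state -> {allowed token: next state}.
FSM = {
    0: {'{': 1},
    1: {'"answer"': 2},
    2: {':': 3},
    3: {'"yes"': 4, '"no"': 4},
    4: {'}': 5},
}

def constrained_sample(fsm, final_state):
    state, out = 0, []
    while state != final_state:
        allowed = list(fsm[state])      # mask: only these tokens survive this step
        token = random.choice(allowed)  # the "model's" pick among allowed tokens
        out.append(token)
        state = fsm[state][token]
    return "".join(out)

print(constrained_sample(FSM, 5))  # e.g. {"answer":"yes"}
```

Real implementations compile a regex or JSON schema into such an automaton and apply the mask to the model's token logits, but the guarantee is the same: every sample parses.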

Difference from BAML: his tools are the last line of defense and don't incur extra token cost or a round trip to the model (which BAML does when the model returns invalid data).

Natalia Segal - Nvidia

Talked about Nvidia Cosmos - a foundational model trained to understand physics. It's open source.

Use cases:

  • Detecting AI-generated videos (understands physics, so can spot impossible movements)
  • Analyzing behavior in videos (e.g., detecting package theft based on suspicious behavior)
  • Safety instruction compliance verification

Alex Palcuie - Anthropic

Most memorable: a comparison of AI agents with autonomous vehicles. He claimed we're currently at Level 2 (partial automation), where a human still needs to verify the output.


Sponsors

  • GitLab - pitched Knowledge Graph - claimed context is better than grepping and way faster
  • iExec - confidential computing using TEEs, processing data without revealing it
  • Auth0 - agent authentication and authorizing agents on behalf of users
  • Dataiku - RAG stuff
  • Linkup - search for AI apps, basically RAG for better context
  • leboncoin - ML for fraud prevention (French Kleinanzeigen alternative)
  • Upsun - Platform as a Service
  • Doctolib - Using TTS pipeline (1 second latency), 35 members in AI research team, motto: "experiment, fail, learn"
  • Alpic - hosting for MCP servers
  • Gladia - speech-to-text (alternative to ElevenLabs, need to test)

Tools & Links Discovered

Tech Stack Tips

  • POCs: Vercel + Supabase
  • Telemetry: Grafana + Prometheus
  • Models: GPT for agents, Qwen for other tasks

Random Notes

"The future is not evenly distributed"

Met cool people and got recommendations:

Recommended Resources