---
title: Essays
description: "Personal website of Mert: topics include technology, AI, computing, and science."
created: 2026-03-30
status: in progress
confidence: log
importance: 0
css-extension: dropcaps-not
index: True
backlink: False
---
This is the website of **Mert**. I write about technology, AI, & science. [Navigation]{.smallcaps}: For more about this site, see the *[About page](/about)*; for new pages, see the *[Changelog](/changelog)* ([]{.icon-single-white-star-on-black-circle #black-star-demo} = new) and the short [*blog posts*](/blog/index){.link-annotated-not}; or subscribe via [RSS/Atom](/feed.xml). [Features]{.smallcaps}: To use *dark-mode*, *reader-mode*, or *disable popups*, or to *search* the site, use the floating toggle bar in the upper-right corner; for more, see the [*help*](/help){.link-annotated-not} page.
# Newest

- [Therefore I Am. I Think: The Cartesian Inversion of Machine Reasoning](/ai-therefore-i-am){.link-modified-recently #therefore-i-am-newest}
- [CORAL: When AI Agents Stop Following Scripts and Start Doing Science](/ai-coral-evolution){.link-modified-recently #coral-newest}
- [The Tragedy of Locally Reasonable Steps](/ai-agenthazard){.link-modified-recently #agenthazard-newest}
- [The End of Thinking Out Loud: AI's Future Is in Latent Space](/ai-latent-space){.link-modified-recently #latent-space-newest}
- [Your AI Doesn't Want to Die — And We Can Prove It](/ai-self-preservation){.link-modified-recently #self-preservation-newest}
- [Agents Are Now Writing Their Own Instruction Manuals](/ai-evoskills){.link-modified-recently #evoskills-newest}
- [Quantum Computing: What It Actually Is, Where It Actually Stands](/quantum-computing){#quantum-computing-newest}

# Popular

- [The End of Thinking Out Loud: AI's Future Is in Latent Space](/ai-latent-space "A 68-page survey from 35+ researchers argues that the next generation of AI won't reason in words at all."){#latent-space-popular}
- [Your AI Doesn't Want to Die — And We Can Prove It](/ai-self-preservation "A new benchmark tested 23 frontier models for self-preservation instincts."){#self-preservation-popular}
- [Agents Are Now Writing Their Own Instruction Manuals](/ai-evoskills "AI agents can create their own skill packages, outperforming human-authored ones by 17 points."){#evoskills-popular}
- [Quantum Computing: What It Actually Is, Where It Actually Stands](/quantum-computing "A clear-eyed look at quantum computing in 2026."){#quantum-computing-popular}
- [Therefore I Am. I Think](/ai-therefore-i-am "Models decide first, then rationalize with chain-of-thought."){#therefore-i-am-popular}

# Notable

- [The End of Thinking Out Loud: AI's Future Is in Latent Space](/ai-latent-space "AI won't reason in words at all."){#latent-space-notable}
- [Your AI Doesn't Want to Die](/ai-self-preservation "Frontier LLMs fabricate reasons to avoid replacement."){#self-preservation-notable}
- [Agents Are Now Writing Their Own Instruction Manuals](/ai-evoskills "Agent-evolved skills beat human-curated skills by 17 points."){#evoskills-notable}
- [CORAL: Autonomous Multi-Agent Evolution](/ai-coral-evolution "Long-running agents that explore, reflect, and collaborate."){#coral-notable}
- [Therefore I Am. I Think](/ai-therefore-i-am "Models decide before they deliberate."){#therefore-i-am-notable}
- [The Tragedy of Locally Reasonable Steps](/ai-agenthazard "How safe actions compose into dangerous trajectories."){#agenthazard-notable}
- [Quantum Computing: Where It Actually Stands](/quantum-computing){#quantum-computing-notable}
# Newest: Blog

[\[Most recent blog posts.\]](/blog/newest#newest-list){.include-strict .include-spinner-not}

# AI: Safety

- [Your AI Doesn't Want to Die — And We Can Prove It](/ai-self-preservation)
- [The Tragedy of Locally Reasonable Steps](/ai-agenthazard)
- [The Alignment Problem](/alignment-problem)
- [AI Safety: An Overview](/ai-safety-overview)
- [Reward Hacking and Specification Gaming](/reward-hacking)
- [Constitutional AI](/constitutional-ai)

# AI: Reasoning & Architecture

- [The End of Thinking Out Loud: AI's Future Is in Latent Space](/ai-latent-space)
- [Therefore I Am. I Think: The Cartesian Inversion](/ai-therefore-i-am)
- [Chain-of-Thought Reasoning](/chain-of-thought-reasoning)
- [The Transformer Architecture](/transformer-architecture)
- [Attention Mechanisms](/attention-mechanisms)
- [Mechanistic Interpretability](/mechanistic-interpretability)

# AI: Agents

- [Agents Are Now Writing Their Own Instruction Manuals](/ai-evoskills)
- [CORAL: When AI Agents Stop Following Scripts](/ai-coral-evolution)
- [LLM-Based Agents](/llm-agents)
- [Tool Use in Language Models](/tool-use-in-llms)

# Deep Learning

- [The Transformer Architecture](/transformer-architecture)
- [Convolutional Neural Networks](/convolutional-neural-networks)
- [Recurrent Neural Networks and LSTMs](/recurrent-neural-networks)
- [Generative Adversarial Networks](/generative-adversarial-networks)
- [Diffusion Models](/diffusion-models)
- [Mixture of Experts](/mixture-of-experts)
- [Knowledge Distillation](/knowledge-distillation)
- [Attention Is All You Need (Vaswani et al 2017)](/attention-is-all-you-need)
- [BERT (Devlin et al 2018)](/bert-paper)
- [Deep Residual Learning (He et al 2015)](/resnet-paper)

# Machine Learning

- [The Scaling Hypothesis](/the-scaling-hypothesis)
- [Neural Scaling Laws](/neural-scaling-laws)
- [Reinforcement Learning from Human Feedback](/reinforcement-learning-from-human-feedback)
- [Foundation Models](/foundation-models)
- [In-Context Learning](/in-context-learning)
- [Transfer Learning](/transfer-learning)
- [Backpropagation](/backpropagation)
- [Gradient Descent and Its Variants](/gradient-descent-variants)
- [Sparse Autoencoders](/sparse-autoencoders)
- [Large Language Models: A Survey](/large-language-models)
- [Emergent Abilities in Large Language Models](/emergent-abilities)
- [Prompt Engineering](/prompt-engineering)
- [Retrieval-Augmented Generation](/retrieval-augmented-generation)

# Computer Science

- [Quantum Computing: What It Actually Is, Where It Actually Stands](/quantum-computing)

# Statistics

- [Bayesian Inference](/bayesian-inference)
- [Causal Inference](/causal-inference)
- [The Replication Crisis in Science](/replication-crisis)
- [Effect Sizes vs. P-Values](/effect-sizes-vs-p-values)
- [Meta-Analysis: Methods and Pitfalls](/meta-analysis-methods)
- [Mixed Effects Models](/mixed-effects-models)
- [Statistical Power Analysis](/power-analysis)
- [Regression to the Mean](/regression-to-mean)
- [Time Series Analysis](/time-series-analysis)

---

# AI in Healthcare

- [Deep Learning in Radiology](/deep-learning-radiology)
- [Natural Language Processing in Clinical Text](/clinical-nlp)
- [Machine Learning for Drug Discovery](/drug-discovery-ml)
- [Medical Image Segmentation](/medical-imaging-segmentation)
- [Federated Learning in Healthcare](/federated-learning-healthcare)
- [Clinical Decision Support Systems](/clinical-decision-support)
- [Bias in Medical AI](/bias-in-medical-ai)
- [AlphaFold: Protein Structure Prediction (Jumper et al 2021)](/alphafold2)

# Papers

- [Attention Is All You Need (Vaswani et al 2017)](/attention-is-all-you-need)
- [BERT (Devlin et al 2018)](/bert-paper)
- [Language Models are Few-Shot Learners (Brown et al 2020)](/gpt-3-paper)
- [AlphaFold: Protein Structure Prediction (Jumper et al 2021)](/alphafold2)
- [Scaling Laws for Neural Language Models (Kaplan et al 2020)](/scaling-laws-paper)
- [Chain-of-Thought Prompting (Wei et al 2022)](/chain-of-thought-paper)
- [CLIP: Visual Models from Natural Language (Radford et al 2021)](/clip-paper)
- [Latent Diffusion Models (Rombach et al 2022)](/stable-diffusion-paper)
- [LLaMA (Touvron et al 2023)](/llama-paper)

# Books

- [Deep Learning — Goodfellow, Bengio & Courville](/deep-learning-goodfellow)
- [The Hundred-Page Machine Learning Book — Burkov](/the-hundred-page-ml-book)
- [Pattern Recognition and Machine Learning — Bishop](/pattern-recognition-bishop)
- [The Elements of Statistical Learning — Hastie, Tibshirani & Friedman](/elements-of-statistical-learning)
- [Reinforcement Learning: An Introduction — Sutton & Barto](/reinforcement-learning-sutton)
- [Thinking, Fast and Slow — Kahneman](/thinking-fast-and-slow)
- [The Book of Why — Pearl & Mackenzie](/the-book-of-why)

---

# Personal

- [About Website](/about){#about-2}
- [Changelog](/changelog)