Ollama
Posts tagged with Ollama.
16 Jun 2025
Running AI models locally on your Mac M1 is easier than you think.
No cloud. No expensive subscriptions. No unnecessary complexity. Just your own hardware and full control.
I just released a 30-minute playbook that shows exactly how to run Ollama fully local on Mac M1:
- Full install guide
- Copy-paste terminal commands
- Model recommendations tested on M1 hardware
- Performance optimization tips
- Local security checklist
- Bonus cheat sheet included

Launch price: $5
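The copy-paste workflow the playbook walks through boils down to a handful of Ollama CLI commands. A minimal sketch (the model name `llama3.2` here is an example, not necessarily the playbook's recommendation for your M1's memory):

```shell
# Install Ollama on macOS via Homebrew (you can also download
# the app directly from ollama.com)
brew install ollama

# Start the local server; it keeps running in this terminal
ollama serve

# In another terminal: pull a model sized for your M1's RAM
# (llama3.2 is an example; pick per the guide's recommendations)
ollama pull llama3.2

# Chat with it entirely on-device, no cloud involved
ollama run llama3.2

# See which models are installed locally
ollama list
```

Everything above runs against localhost only, which is the whole point: no API keys, no data leaving the machine.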
14 Jun 2025
Building My Own Sovereign RAG for Secure Code Analysis
Lately I've been taking a closer look at some code-analysis tools that claim to detect security vulnerabilities in software projects. The idea itself is solid. One of these tools was recommended to me, and I decided to dig deeper and see what's really behind these solutions.
Pretty quickly I noticed a pattern: these platforms are far from cheap. Some offer limited free plans, but we all know how this game works.
21 Apr 2025
Let’s cut the fluff: if you care about privacy, speed, and having full control over your stack, running LLMs locally is no longer optional — it’s survival. Cloud’s nice until it’s not. Especially when your data is the product and your API bill explodes overnight.
I put this guide together because I wanted LLMs running locally, even with limited hardware — no vendor lock-in, no middlemen sniffing packets, just raw local compute.
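Once Ollama is serving on the default local port, "raw local compute" is just an HTTP call away. A minimal sketch against Ollama's documented `/api/generate` endpoint (the model name `llama3.2` is an assumption; substitute whatever fits your hardware):

```python
import json
import urllib.request

# Ollama's default local endpoint -- nothing leaves your machine
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of
    newline-delimited chunks.
    """
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its text.

    Assumes `ollama serve` is running and the model has been pulled.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running server):
#   ask("llama3.2", "Summarize why local inference matters.")
```

No SDK, no vendor client library, no telemetry: the standard library is enough, which is exactly the kind of dependency footprint you want when the goal is owning the whole stack.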