TechDogs-"Dominik Tomicevic, CEO Of Memgraph, On Stopping Organizations Getting Fooled By Even The Best LLMs"

Dominik Tomicevic, CEO Of Memgraph, On Stopping Organizations Getting Fooled By Even The Best LLMs

By Manali Kekade

Overview

Large Language Models (LLMs) don’t truly think—they mimic reasoning patterns based on their training data. As Dominik Tomicevic, CEO of Memgraph, points out, LLMs lack real cause-and-effect understanding, making them unreliable for complex business decisions that require deep analytical reasoning.

Here is a brief introduction to Dominik:

Dominik Tomicevic is the Founder and CEO of Memgraph, the leader in open-source, in-memory graph databases purpose-built for dynamic, real-time enterprise applications.

In 2011, Dominik was one of only four people worldwide to receive the Microsoft Imagine Cup Grant, personally awarded by Bill Gates. In 2016, he founded Memgraph, a venture-backed graph database company specializing in high-performance, real-time connected data processing.

In 2017, Forbes recognized Dominik as one of the top 10 Technology CEOs to watch. Today, Memgraph boasts an open-source community of 150,000 members and a portfolio of Global 2000 customers, including NASA, Cedars-Sinai, and Capitec Bank. The company’s mission is to deliver knowledge graphs with unprecedented integration and ease of use, setting a new benchmark for knowledge graph solutions.


During the Q&A he puts emphasis on: while LLMs can generate relevant-sounding responses, they fall short when it comes to verifying facts or explaining their logic. To bridge this gap, businesses need a hybrid AI approach, integrating graph-based AI and retrieval-augmented generation (RAG) to enhance LLMs with structured, real-world data. Rather than replacing human decision-makers, AI should act as an intelligent assistant, ensuring accuracy, transparency, and reliability in decision-making.
TD Editor: Large language models (LLMs) have shown remarkable capabilities, but where do you see their biggest limitations or risks for organizations today?

Dominik Tomicevic: LLMs are incredible at generating text that sounds intelligent, but their biggest flaw is that they don’t actually know anything—they’re just predicting words based on probability. This becomes a serious issue when businesses start treating them as sources of truth, leading to AI-generated misinformation, hallucinations, and a lack of traceability.

Despite the advancements we're seeing with tools like OpenAI’s o-series and DeepSeek’s R1, the fundamental challenges of hallucinations, lack of reasoning, and static knowledge remain unresolved. Imagine an LLM confidently telling a financial institution that a nonexistent regulation applies to their business. Without a mechanism to verify facts, misinformation can lead to costly decisions.

A major misconception is that bigger models will solve these problems. They won’t. LLMs don’t think—they mimic reasoning patterns based on their training data. If a business decision requires actual cause-and-effect analysis, an LLM alone won’t cut it. And retraining them? It’s expensive, complex, and often impractical, leaving organizations with static knowledge that quickly becomes outdated.

This is why hybrid approaches like GraphRAG matter. Graphs aren’t just storage systems; they’re dynamic, interconnected networks of meaning that can enhance an LLM’s ability to retrieve, reason, and adapt in real time. Instead of relying on guesswork, structured data provides context, accuracy, and traceability.
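
To make the GraphRAG idea concrete, here is a minimal sketch of the retrieval step, assuming a Cypher-compatible graph database reachable over Bolt (such as Memgraph) through the neo4j Python driver. The connection details, node schema, and the call_llm placeholder are assumptions for illustration, not a prescribed implementation.

```python
# Minimal GraphRAG-style retrieval sketch: pull a small subgraph of facts
# about an entity and ground the LLM prompt in them.
# Assumptions: a Cypher-compatible database on bolt://localhost:7687,
# nodes with a `name` property, and a placeholder LLM client.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

def call_llm(prompt: str) -> str:
    # Placeholder: swap in whatever LLM client the organization already uses.
    raise NotImplementedError

def retrieve_facts(entity_name: str, limit: int = 25) -> list[str]:
    """Fetch (subject)-[relation]->(object) triples touching the entity."""
    query = """
        MATCH (e {name: $name})-[r]->(n)
        RETURN e.name AS subject, type(r) AS relation, n.name AS object
        LIMIT $limit
    """
    with driver.session() as session:
        records = session.run(query, name=entity_name, limit=limit)
        return [f"{r['subject']} -{r['relation']}-> {r['object']}" for r in records]

def answer_with_graph_context(question: str, entity_name: str) -> str:
    """Assemble a prompt grounded in retrieved facts instead of guesswork."""
    facts = retrieve_facts(entity_name)
    prompt = (
        "Answer using ONLY the facts below. If they are insufficient, say so.\n"
        "Facts:\n" + "\n".join(facts) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```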

The future of AI doesn’t lie in a prolonged quest for “true” AI or Artificial General Intelligence (AGI), or even the next iteration of LLMs—it’s in making AI actually useful for real-world enterprise challenges. That means combining generative AI with structured, verifiable knowledge so businesses can trust their AI-driven decisions.

TD Editor: What are some real-world examples where organizations might mistake LLM-generated outputs for true reasoning, and what risk does this pose?

Dominik Tomicevic: The biggest danger with LLMs isn’t just that they can be wrong—it’s that they are persuasive when they’re wrong. They generate outputs that sound authoritative, making it easy for organizations to mistake confident text for true reasoning.

Take financial fraud detection. An LLM might be asked, “Does this transaction look suspicious?” and respond with something that sounds confident—“Yes, because it resembles known fraudulent patterns.” But does it actually understand the relationships between accounts, historical behavior, or hidden transaction loops? No. It’s just echoing probability-weighted phrases from training data. True fraud detection requires structured reasoning over financial networks—something LLMs alone cannot provide.
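
As a hedged illustration of “hidden transaction loops”: a graph query can deterministically enumerate chains of transfers that return to the originating account, exactly the kind of structural check a text predictor cannot perform. The Account label, TRANSFERRED_TO relationship, and hop limits below are invented for the sketch.

```python
# Hypothetical cycle detection over a transaction graph: find chains of
# 2 to 5 transfers that end back at the originating account.
# The Account/TRANSFERRED_TO schema is illustrative only.
CYCLE_QUERY = """
    MATCH path = (a:Account)-[:TRANSFERRED_TO*2..5]->(a)
    WHERE a.id = $account_id
    RETURN [n IN nodes(path) | n.id] AS accounts_in_loop,
           length(path) AS hops
"""

def find_transaction_loops(session, account_id: str):
    """Return every transfer loop that starts and ends at the given account."""
    result = session.run(CYCLE_QUERY, account_id=account_id)
    return [(rec["accounts_in_loop"], rec["hops"]) for rec in result]
```

The session argument here is a database session like the one opened in the earlier sketch.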

Or imagine a legal department using an LLM to summarize case law. The model might generate a compelling argument—but based on fictional citations or misinterpretations of precedent. If a lawyer blindly trusts that output, it could lead to catastrophic legal mistakes in court filings or contract negotiations.

Now, let’s talk about pharmaceutical research. A company might use an LLM to summarize clinical trial results or predict drug interactions. The model could generate a response like “This combination of compounds has shown a 30% increase in efficacy.” But if those trials weren’t conducted together, if key side effects are overlooked, or if regulatory constraints are ignored, the consequences could be severe.

Or consider supply chain optimization. An enterprise might ask an LLM, “What’s the best way to restructure our logistics network?” The model could suggest moving warehouses or adjusting shipment routes based on text it has seen, but does it understand fuel price fluctuations, geopolitical trade restrictions, or real-time inventory data? No. It’s predicting answers based on past data, not analyzing current realities.

Then there’s cybersecurity—a domain where a wrong move can mean massive breaches. Imagine a security team asking an LLM, “How should we respond to this network breach?” The model might suggest steps that sound legitimate but aren’t actually aligned with the organization’s infrastructure, latest threat intelligence, or compliance needs. Following AI-generated cybersecurity recommendations blindly could leave the company even more vulnerable.

And let’s not forget enterprise risk management. Suppose a company asks an LLM, “What are the biggest financial risks for our business next year?” The model might confidently generate a response based on past economic downturns, but it has zero awareness of real-time macroeconomic shifts, government regulations, or industry-specific risks. Without structured reasoning and real-time data integration, this is just educated guessing dressed up as insight.

This is why structured, verifiable data are non-negotiable in enterprise AI. LLMs can generate useful insights, but without a real reasoning layer—like knowledge graphs and graph-based retrieval—businesses are flying blind. The key isn’t just making AI answer questions but ensuring it understands the relationships, logic, and real-world constraints behind the answers.

TD Editor: In a few quotes shared with us, you’ve compared LLMs to “parrots” mimicking learned behavior without genuine understanding. Why do you think this distinction is critical for decision-makers to grasp?

Dominik Tomicevic: Decision-makers need to understand that LLMs do not reason—they remix.

Imagine you’re training a parrot to describe how to fly an airplane. The parrot listens to a pilot, memorizes the exact words, and repeats them fluently. It even sounds confident, saying things like, “Increase throttle, adjust flaps, check altitude.” But if you put that parrot in the cockpit, would you trust it to land the plane? Absolutely not. The parrot doesn’t understand aerodynamics, fuel levels, or emergency procedures—it’s just repeating patterns it learned.

That’s exactly the situation we’re in with LLMs today.

LLMs are phenomenal at sounding intelligent, but they don’t possess causal understanding. They don’t know why things happen—only that certain words tend to follow others in the data they’ve seen. They can describe economic downturns but can’t analyze why they happen. They can summarize medical studies but don’t understand disease progression. They can identify suspicious transactions but don’t trace fraud networks.

This distinction is critical because businesses that mistake LLM-generated text for true reasoning will make costly, high-stakes errors. If an LLM suggests a financial move, a compliance decision, or a supply chain adjustment, what is that based on? Probabilistic pattern-matching—not an actual understanding of cause and effect.

That’s why enterprises must pair LLMs with structured, verifiable knowledge. A knowledge graph, for example, acts as a source of truth—a way to structure relationships between data points so AI isn’t just parroting words but reasoning over real-world facts. Without this, AI remains a high-tech guessing machine, and no serious business should be making decisions based on guesswork.
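
Here is a toy sketch of that “source of truth” idea using networkx, with invented entities: a claim is accepted only if the corresponding relationship actually exists in the graph, so a confident-sounding but unsupported statement is rejected rather than repeated.

```python
# Toy knowledge graph as a source of truth: a claim is accepted only if
# an explicit edge with the stated relation exists. Entities are invented.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Regulation-X", "retail banking", relation="APPLIES_TO")
kg.add_edge("Acme Bank", "retail banking", relation="OPERATES_IN")

def claim_is_supported(subject: str, relation: str, obj: str) -> bool:
    """True only if the exact relationship is recorded in the graph."""
    return kg.has_edge(subject, obj) and kg[subject][obj]["relation"] == relation

print(claim_is_supported("Regulation-X", "APPLIES_TO", "retail banking"))  # True
print(claim_is_supported("Regulation-Y", "APPLIES_TO", "retail banking"))  # False
```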

TD Editor: Many LLMs today rely on techniques like chain-of-thought prompting or few-shot learning. What do you see as the limitations of these approaches when applied to mission-critical business problems?

Dominik Tomicevic: LLMs are packed with clever tricks—Chain-of-Thought Prompting, Few-Shot Learning, Simulated Reasoning, Synthetic Data, and Pseudo-Structure. These techniques make models seem smarter, but they’re just polished shortcuts. LLMs don’t actually understand your business—whether it’s supply chain logistics, fraud detection, or manufacturing. They’re just remixing patterns from the public internet.

This is a major problem when businesses try to use generic LLMs for high-stakes decisions. If you’re dealing with complex relationships—like tracing financial fraud, optimizing production lines, or predicting market shifts—LLMs alone can’t provide the precision, traceability, or security enterprises need. Worse, they introduce compliance risks when they generate plausible but incorrect insights.

Some try to fix this by bolting on naïve RAG—feeding LLMs more documents to pull answers from. But dumping unstructured text into a vector database doesn’t create real reasoning. A pile of PDFs won’t tell you why a supply chain bottleneck is happening. That’s where structured data and knowledge graphs come in. Graphs map real-world relationships, enabling AI to reason over facts, not just retrieve text.
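
A small, hypothetical illustration of the bottleneck point: keyword or embedding search over documents can only return similar-sounding text, whereas traversing a dependency graph surfaces the actual upstream causes. The supply-chain entities below are invented.

```python
# Tracing a bottleneck's upstream causes by traversing a dependency graph,
# something keyword or embedding search over documents cannot do.
# The supply-chain entities are invented for illustration.
import networkx as nx

supply = nx.DiGraph()  # edge A -> B means "A feeds into B"
supply.add_edge("chip supplier", "controller board")
supply.add_edge("controller board", "assembly line 3")
supply.add_edge("packaging vendor", "assembly line 3")
supply.add_edge("port delay", "chip supplier")

def upstream_causes(bottleneck: str) -> set[str]:
    """Everything that feeds, directly or indirectly, into the bottleneck."""
    return nx.ancestors(supply, bottleneck)

print(upstream_causes("assembly line 3"))
# {'port delay', 'chip supplier', 'controller board', 'packaging vendor'}
# (set ordering may vary)
```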

Then there’s the built-in obsolescence issue. Unless an LLM is continuously fine-tuned (which is costly and impractical), it quickly becomes outdated. The better approach? Bring reasoning to the data, not the data to the model. Instead of retraining an LLM endlessly, businesses should structure their knowledge with GraphRAG—combining retrieval with deterministic graph-based reasoning. This way, AI stays context-aware, relevant, and auditable—without hallucinations or infinite retraining cycles.
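
The same kind of toy graph also illustrates the “bring reasoning to the data” point: when the world changes, you update the graph and the very next query reflects it, with no retraining step involved. Again, the data is invented for the sketch.

```python
# "Bring reasoning to the data": new information is ingested as graph
# updates, not baked into model weights, so answers stay current.
import networkx as nx

supply = nx.DiGraph()
supply.add_edge("chip supplier", "controller board")
supply.add_edge("controller board", "assembly line 3")

print(nx.ancestors(supply, "assembly line 3"))
# {'chip supplier', 'controller board'}

# A new disruption arrives as data; no fine-tuning cycle is required:
supply.add_edge("customs strike", "chip supplier")

print(nx.ancestors(supply, "assembly line 3"))
# {'chip supplier', 'controller board', 'customs strike'}
```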

If we want AI to be truly useful at scale, we can’t just make LLMs bigger. We need to make them smarter—and that starts with structured, connected knowledge.

TD Editor: For executives considering LLM adoption, what are the top three questions they should ask to avoid being “fooled” by the technology?

Dominik Tomicevic:

  1. “How does this model verify what it’s saying?” If the answer is “It doesn’t,” you’ve got a problem. AI must be able to trace its outputs back to authoritative data sources, not just generate plausible text (see the short verification sketch after this list).

  2. “What happens when the model is wrong?” If failure modes aren’t well understood, the risk is unpredictable. What’s the fallback mechanism when an LLM makes an error?

  3. “Can this system provide reasoning, not just responses?” If it’s only generating fluent text without structured reasoning capabilities, it’s just a fancy autocomplete, not a decision-making tool.
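
To make the first question actionable, here is a small, hypothetical pattern rather than a product feature: every stored fact carries a source reference, and an answer is accepted only when each claim it relies on traces back to one.

```python
# Hypothetical verification gate: each claim an answer relies on must trace
# back to a recorded source. The fact store and sources are invented.
FACT_SOURCES = {
    ("Vendor-A", "CERTIFIED_FOR", "ISO-27001"): "audit-report-2024-Q3",
    ("Vendor-A", "SUPPLIES", "Region-EU"): "contract-registry",
}

def verify_claims(claims: list[tuple[str, str, str]]) -> dict:
    """Map each claim to its source, or flag it as unverifiable."""
    return {c: FACT_SOURCES.get(c, "UNVERIFIABLE") for c in claims}

print(verify_claims([
    ("Vendor-A", "CERTIFIED_FOR", "ISO-27001"),
    ("Vendor-A", "CERTIFIED_FOR", "SOC-2"),  # a claim the model invented
]))
# {('Vendor-A', 'CERTIFIED_FOR', 'ISO-27001'): 'audit-report-2024-Q3',
#  ('Vendor-A', 'CERTIFIED_FOR', 'SOC-2'): 'UNVERIFIABLE'}
```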

TD Editor: What mindset or organizational shift is necessary to ensure AI adoption aligns with business goals and ethical standards?

Dominik Tomicevic: The most significant mindset shift is to stop treating LLMs as standalone decision-makers.

AI should be an assistant, not an oracle. Organizations need to build AI systems that are auditable, explainable, and grounded in real data. That means combining LLMs with structured knowledge—whether through GraphRAG, knowledge graphs, or other verifiable frameworks.

Another shift is rethinking AI governance. Who is responsible for AI-generated outputs? How do we ensure bias doesn’t creep in? Companies need clear policies before deploying AI in production.

And finally, AI adoption should be business-first, not hype-driven. Too many companies are rushing to deploy LLMs without understanding their limitations. The best approach? Start with actual business problems and build AI solutions that integrate reasoning, not just generation.

The future of AI isn’t just about bigger models—it’s about smarter architectures. The organizations that embrace hybrid reasoning will be the ones that truly unlock AI’s potential.
