We are the Causal Perception and Reasoning research group, based in HCMC, Vietnam. Our group's mission is built on a simple but powerful premise: true AI must move beyond correlation-based perception (recognizing what is in the data) to causal reasoning (understanding why it happens). This focus on causality is the common thread connecting our diverse research areas. We don't just build models that see; we build models that understand.

Our Core Research

  1. Causal Video Understanding
    • Go beyond what is happening in a video to understand why it’s happening.
    • Employ VLMs to perceive and “ground” raw video into a stream of textual events, actions, and expressions.
    • Employ LLMs and causal frameworks to reason about the causal triggers and dynamics that connect these events.
    • Analyze causal chains in complex scenes, from social interactions to abnormal event detection.
  2. Causal Document Intelligence
    • Use VLMs to understand document layout and structure as a causal graph.
    • Employ LLMs to reason over the semantic content of extracted information.
    • Infer the logical and causal relationships between text, tables, and figures.
  3. Causal AI in Healthcare
    • Focus on causal inference for trustworthy, safe medical diagnostics.
    • Apply VLMs to find causal biomarkers in medical images by linking them to clinical notes.
    • Use LLMs to extract causal insights from electronic health records (EHRs).
  4. Physics-Informed Causal AI
    • Build causal models for complex physical processes, such as battery degradation or impedance spectroscopy, to discover underlying scientific mechanisms.
    • Leverage LLMs to parse sensor logs and VLMs for visual inspection to find the true cause of abnormal events.
    • Develop causal “digital twins” that enable counterfactual simulations for process optimization.
  5. Foundations of Causal LLMs & Neuro-Symbolic AI
    • Investigate the causal reasoning capabilities (and failures) of LLMs and VLMs.
    • Develop novel neuro-symbolic architectures that integrate large models (neuro) with formal causal logic (symbolic), bridging perception and logic.
    • Build models capable of counterfactual reasoning (what-if scenarios) by augmenting LLMs.
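The counterfactual ("what-if") reasoning at the heart of several of these directions can be illustrated with a minimal structural causal model. The variables, coefficients, and the temperature-degradation relationship below are hypothetical, chosen only to show the abduction-action-prediction pattern, not any actual model of ours.

```python
# Minimal structural causal model (SCM) sketch of counterfactual reasoning.
# Hypothetical example: operating temperature T causes battery degradation D.

def structural_eq(t, noise_d):
    """Structural equation D := 0.5 * T + U_D (coefficient is illustrative)."""
    return 0.5 * t + noise_d

# Factual observation.
t_obs, d_obs = 40.0, 23.0

# Step 1 (abduction): infer the exogenous noise consistent with the observation.
noise_d = d_obs - 0.5 * t_obs  # U_D = 3.0

# Step 2 (action): intervene on the cause, do(T = 25).
t_cf = 25.0

# Step 3 (prediction): recompute the effect under the intervention,
# holding the inferred noise fixed.
d_cf = structural_eq(t_cf, noise_d)

print(d_cf)  # counterfactual degradation: 15.5
```

This three-step pattern (abduction, action, prediction) is what a causal "digital twin" repeats over a full process model instead of a single equation.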

We are actively growing and seeking passionate collaborators to build the future of causal AI.