# Introduction to Markov Logic Networks (MLNs)

Learn the basics of Markov logic networks and why they are used.


## The approach

Ha et al. (2014) presented another approach, using MLNs to recognize players’ goals in an open-ended game. Before we discuss their approach, we’ll first introduce MLNs. Please note that we assume readers have a fundamental knowledge of first-order logic.

MLNs are similar to BNs, which we discussed earlier in this chapter. The main difference is that MLNs don’t have directed edges representing conditional probability relationships. Instead, they use first-order logic formulas, with weights that determine how probable each logical statement is.

## First-order logic

MLNs were proposed as an approach for combining first-order logic and probabilistic graphical models (Richardson and Domingos, 2006). Using first-order logic, we can represent knowledge in a database, which we can call the knowledge base. The knowledge base represents the state of the world. This knowledge is usually represented in the form of predicates, such as $\text{Shot}(\text{Jim})$, which can be true or false. We can then define rules such as $\forall x, \text{Shot}(x) \rightarrow \text{Killed}(x)$, which means that all people who are shot are killed in the game. Using unification and resolution (logical inference procedures), we can deduce the fact $\text{Killed}(\text{Jim})$ by unifying $x/\text{Jim}$.
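The deduction above can be sketched as a tiny forward-chaining procedure. This is a minimal illustration, not a full first-order reasoner: the knowledge base holds ground facts like $\text{Shot}(\text{Jim})$, and each rule is a single-premise implication whose variable $x$ is unified with each constant in turn.

```python
# A toy knowledge base: ground facts as (predicate, argument) pairs.
facts = {("Shot", "Jim")}

# Rules as (premise predicate, conclusion predicate), i.e.
# forall x, premise(x) -> conclusion(x).
rules = [("Shot", "Killed")]

def forward_chain(facts, rules):
    """Repeatedly apply each rule, unifying x with every known constant."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    # Unification x/arg lets the rule fire on this fact.
                    derived.add((conclusion, arg))
                    changed = True
    return derived

# Killed(Jim) is derived by unifying x with Jim.
print(forward_chain(facts, rules))
```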

MLNs are represented as a knowledge base in first-order logic, as discussed above, but with a weight attached to each predicate or knowledge clause. This weight adds flexibility, and hence uncertainty, to the logical representation: each predicate has a probability of being true or false. This representation can then be viewed as a template for constructing an MN. MNs are similar to BNs but are undirected networks; that is, the relationships between variables do not have a specific direction. Like BNs, they represent relationships between variables $A \in \{a_1, a_2, \dots, a_n\}$, where there's a defined joint probability distribution over the variables. Also like BNs, MNs have been a subject of research for a long time, and the research community has developed various standard algorithms for inference and learning using them. Discussion of the representation and algorithms involved with MNs is out of the scope of this course.
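To make the "template for an MN" idea concrete, the following sketch grounds the single rule $\text{Shot}(\text{Jim}) \rightarrow \text{Killed}(\text{Jim})$ and computes the resulting distribution over possible worlds, where a world's probability is proportional to $\exp(\sum_i w_i\, n_i)$, with $n_i$ the number of satisfied groundings of formula $i$. The weight of 1.5 is an illustrative assumption, not a value from the source.

```python
import itertools
import math

# Ground atoms obtained from the single constant Jim.
atoms = ["Shot(Jim)", "Killed(Jim)"]
w = 1.5  # illustrative weight for the formula Shot(Jim) -> Killed(Jim)

def n_satisfied(world):
    # The implication is violated only when Shot is true and Killed is false.
    return 0 if (world["Shot(Jim)"] and not world["Killed(Jim)"]) else 1

# Enumerate all 2^2 possible worlds (truth assignments to the ground atoms).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
scores = [math.exp(w * n_satisfied(wld)) for wld in worlds]
Z = sum(scores)  # partition function, normalizing the scores

for wld, s in zip(worlds, scores):
    print(wld, round(s / Z, 3))
```

Note that the world violating the rule (Shot true, Killed false) gets score $e^0 = 1$, while every satisfying world gets $e^{1.5}$, so the violating world is possible but less probable.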

A first-order knowledge base can be thought of as a representation of possible worlds, where each rule is a hard constraint: no world exists in which one of the rules is violated. MLNs introduce weights to relax this assumption and add probability to the mix. So, while a world that violates one of the constraints can exist, it will have a smaller probability than one that satisfies all constraints. This captures the idea that being shot may not always result in death; instead, being shot and dying are connected with some probability.
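We can read that probability directly off the weighted rule. In the two-atom example, once $\text{Shot}(\text{Jim})$ is observed true, only two worlds remain: one satisfying the rule (score $e^w$) and one violating it (score $e^0$). Assuming the same illustrative weight of 1.5, the conditional probability of $\text{Killed}(\text{Jim})$ is the sigmoid of the weight:

```python
import math

w = 1.5  # illustrative weight for Shot(x) -> Killed(x)

# Given Shot(Jim) = True, normalize over the two remaining worlds:
# Killed(Jim) = True satisfies the rule (e^w), False violates it (e^0).
p_killed_given_shot = math.exp(w) / (math.exp(w) + math.exp(0))
print(round(p_killed_given_shot, 3))
```

With $w = 1.5$ this gives roughly 0.82, so being shot usually, but not always, results in death; a larger weight pushes the rule closer to a hard constraint.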

## What does this mean for current commercial games?

This approach presents a good possibility for modeling complex relationships within games in a logical fashion, enabling us to reason about players’ actions using this model. However, like the other approaches discussed above, it relies on a large amount of authoring from developers or designers, which may render it infeasible for today's complex games.
