In this post we solve the problem of spatially locating a signal emitter in Euclidean \(\mathbb{R}^3\) space by means of the time difference of arrival (TDOA) at multiple omnidirectional sensors.

## Introduction

Multilateration is a technique that uses multiple omnidirectional sensors to isolate the unknown position of a signal emitter in two- or three-dimensional Euclidean space. Two examples of sensors and signals could be microphones listening for sharp noises, or radio receivers listening for radio signals. In any case, these sensors are located at unique known positions where they listen for what is called a signal event. Such events are timestamped based on a synchronized or centralized clock common to all sensors. The signal from an emitter is registered by each sensor exactly once as the signal wave expands spherically in all directions with constant propagation speed. The time difference between when two sensors register the signal event is called the time difference of arrival (TDOA). Based on the TDOAs and the locations of the registrations, i.e., the sensor positions, we can deduce the location of the signal emitter.

## Model definitions

Let sensors \(i=1,\ldots,k\) at known positions \(\vec{p}_{i}\) output a timestamp \(t_{i}\) when they register a signal wave with propagation speed \(v\) from a signal emitter at unknown position \(\vec{x}\), signal start time \(t\) and distance \(v\left(t_{i}-t\right)=\|\vec{x}-\vec{p}_{i}\|\) away. If we define sensor \(c\) to be the first sensor to register the signal event, then the signal wave has traveled for \((t_{c}-t)\) time units and a distance of \(v(t_{c}-t)=\|\vec{x}-\vec{p}_{c}\|\) before reaching the first sensor. Furthermore, the TDOA between each sensor’s timestamp and the initial sensor’s timestamp is \((t_{i}-t_{c})\), yielding the distance \(v\Delta t_{i}=v(t_{i}-t_{c})\) traveled by the signal wave between initially being registered and finally reaching the \(i\)th sensor. From the definitions above we have the following important relationships.

\begin{equation}

\|\vec{x}-\vec{p}_{i}\|=v\left(t_{i}-t\right)\label{eq:main}

\end{equation}

and the equivalent formulations

\begin{equation}

\|\vec{x}-\vec{p}_{i}\|=v(t_{i}-t_{c})+\|\vec{x}-\vec{p}_{c}\|

\label{eq:mainSplit}

\end{equation}

\begin{equation}

\|\vec{x}-\vec{p}_{i}\|=v(t_{i}-t_{c})+v(t_{c}-t)\label{eq:mainTime}

\end{equation}
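These relationships are easy to sanity-check numerically. Below is a minimal Python sketch with made-up emitter and sensor positions (all values are illustrative, not taken from the post's MATLAB script):

```python
import numpy as np

v = 340.0                        # propagation speed (illustrative, m/s)
t = 2.0                          # signal start time
x = np.array([3.0, -1.0, 2.0])   # emitter position (known here, unknown in practice)
p_c = np.array([0.0, 0.0, 0.0])  # first sensor to register the event
p_i = np.array([10.0, 5.0, -2.0])

# Timestamps implied by the model: the wave needs ||x - p|| / v to reach a sensor
t_c = t + np.linalg.norm(x - p_c) / v
t_i = t + np.linalg.norm(x - p_i) / v

# ||x - p_i|| = v (t_i - t)
assert np.isclose(np.linalg.norm(x - p_i), v * (t_i - t))
# ||x - p_i|| = v (t_i - t_c) + ||x - p_c||   (the TDOA formulation)
assert np.isclose(np.linalg.norm(x - p_i),
                  v * (t_i - t_c) + np.linalg.norm(x - p_c))
```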

The following figure shows the various geometric distance relationships of the multilateration problem.

## Geometric Interpretation

In general the positions \(\vec{p}_{i},\vec{p}_{c}\) and TDOA \(t_{i}-t_{c}\) of a pair of sensors constrain the signal emitter’s position to lie on one sheet of a circular two-sheeted hyperboloid with foci at \(\vec{p}_{i}\) and \(\vec{p}_{c}\). Given that \(c\) is the first of the two sensors to register a signal event, only the sheet at focus \(\vec{p}_{c}\) is materialized. The associated directrix plane can be found from a known position vector and normal vector. Since the directrix is orthogonal to the vector pointing from sensor \(i\) to sensor \(c\), we know that \(\vec{p}_{c}-\vec{p}_{i}\) itself or its unit vector

\[

\frac{\vec{p}_{c}-\vec{p}_{i}}{\|\vec{p}_{c}-\vec{p}_{i}\|}

\]

can serve as a normal vector. Next we need a position vector with its endpoint on the directrix plane. One such vector can be defined by the point at which the directrix intersects the line segment between \(\vec{p}_{i}\) and \(\vec{p}_{c}\). This point lies a distance \(v(t_{i}-t_{c})\) from \(\vec{p}_{i}\) towards \(\vec{p}_{c}\), that is, at the position vector

\[

\vec{p}_{i}+v(t_{i}-t_{c})\frac{\vec{p}_{c}-\vec{p}_{i}}{\|\vec{p}_{c}-\vec{p}_{i}\|}

\]
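As a small illustration, the normal vector and the plane point can be computed directly; the positions, timestamps and speed below are made-up values:

```python
import numpy as np

v = 340.0                          # propagation speed (illustrative)
p_i = np.array([0.0, 0.0, 0.0])    # later sensor
p_c = np.array([100.0, 0.0, 0.0])  # first sensor to register the event
t_i, t_c = 1.25, 1.0               # so the TDOA t_i - t_c is 0.25

# Unit normal of the plane: the direction from sensor i towards sensor c
n = (p_c - p_i) / np.linalg.norm(p_c - p_i)

# Point where the plane intersects the segment from p_i to p_c,
# a distance v (t_i - t_c) from p_i
q = p_i + v * (t_i - t_c) * n
```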

The following figure illustrates how TDOA can be used to constrain the multilateration problem solution to one sheet of a two-sheeted circular hyperboloid in \(\mathbb{R}^{3}\).

With enough of these constraints, i.e., having enough sensors, we can isolate the signal emitter position to a single point as in the following figure in \(\mathbb{R}^{2}\) space.

## Reduction to Linear Form

The most efficient way to solve the multilateration problem is to reduce the problem to linear form. To determine the location of the signal emitter we can use three unique pairs \(i,j\) of sensors in addition to an initial discovery sensor \(c\). This reduces the problem to a linear system of three equations and requires at least five sensors.

\[

\begin{alignedat}{2}\|\vec{x}-\vec{p}_{i}\| & = & v(t_{i}-t_{c})+\|\vec{x}-\vec{p}_{c}\|\\

\|\vec{x}-\vec{p}_{i}\|^{2} & = & \left(v(t_{i}-t_{c})+\|\vec{x}-\vec{p}_{c}\|\right)^{2}\\

\|\vec{x}-\vec{p}_{i}\|^{2} & = & v^{2}(t_{i}-t_{c})^{2}+2v(t_{i}-t_{c})\|\vec{x}-\vec{p}_{c}\|+\|\vec{x}-\vec{p}_{c}\|^{2}

\end{alignedat}

\]

Introducing another sensor \(j\), we can eliminate the \(\|\vec{x}-\vec{p}_{c}\|\) term.

\begin{equation}

-2\|\vec{x}-\vec{p}_{c}\| = v(t_{i}-t_{c})+\frac{\|\vec{x}-\vec{p}_{c}\|^{2}-\|\vec{x}-\vec{p}_{i}\|^{2}}{v(t_{i}-t_{c})}

\label{eq:tempRes1}

\end{equation}

\begin{equation}

-2\|\vec{x}-\vec{p}_{c}\| = v(t_{j}-t_{c})+\frac{\|\vec{x}-\vec{p}_{c}\|^{2}-\|\vec{x}-\vec{p}_{j}\|^{2}}{v(t_{j}-t_{c})}

\label{eq:tempRes2}

\end{equation}

\begin{equation}

v(t_{j}-t_{c})+\frac{\|\vec{x}-\vec{p}_{c}\|^{2}-\|\vec{x}-\vec{p}_{j}\|^{2}}{v(t_{j}-t_{c})} = v(t_{i}-t_{c})+\frac{\|\vec{x}-\vec{p}_{c}\|^{2}-\|\vec{x}-\vec{p}_{i}\|^{2}}{v(t_{i}-t_{c})}

\label{eq:tempRes3}

\end{equation}

Now, expanding the squared distances between each sensor and the emitter, we get the following.

\begin{equation}

\begin{alignedat}{2}\|\vec{x}-\vec{p}_{i}\|^{2}

& = & \left(\vec{x}-\vec{p}_{i}\right)^{T}\left(\vec{x}-\vec{p}_{i}\right)=\vec{x}^{T}\vec{x}-2\vec{p}_{i}^{T}\vec{x}+\vec{p}_{i}^{T}\vec{p}_{i}\\

\|\vec{x}-\vec{p}_{j}\|^{2} & = & \vec{x}^{T}\vec{x}-2\vec{p}_{j}^{T}\vec{x}+\vec{p}_{j}^{T}\vec{p}_{j}\\

\|\vec{x}-\vec{p}_{c}\|^{2} & = & \vec{x}^{T}\vec{x}-2\vec{p}_{c}^{T}\vec{x}+\vec{p}_{c}^{T}\vec{p}_{c}

\end{alignedat}

\label{eq:xpDefRe}

\end{equation}

This leads to

\begin{equation}

\begin{alignedat}{2}v(t_{j}-t_{c})+\frac{2\left(\vec{p}_{j}^{T}-\vec{p}_{c}^{T}\right)\vec{x}+\vec{p}_{c}^{T}\vec{p}_{c}-\vec{p}_{j}^{T}\vec{p}_{j}}{v(t_{j}-t_{c})} \\ = v(t_{i}-t_{c})+\frac{2\left(\vec{p}_{i}^{T}-\vec{p}_{c}^{T}\right)\vec{x}+\vec{p}_{c}^{T}\vec{p}_{c}-\vec{p}_{i}^{T}\vec{p}_{i}}{v(t_{i}-t_{c})}\end{alignedat}

\label{eq:systemRaw}

\end{equation}

Notice that the quadratic terms have cancelled out and we are left with a readily solvable linear system of equations

\[

\vec{a}_{ijc}^{T}\vec{x}=b_{ijc}

\]

where

\[

\begin{alignedat}{2}\vec{a}_{ijc} & = & 2\left(v(t_{j}-t_{c})(\vec{p}_{i}-\vec{p}_{c})-v(t_{i}-t_{c})(\vec{p}_{j}-\vec{p}_{c})\right)\in\mathbb{R}^{3}\\

b_{ijc} & = & v(t_{i}-t_{c})\left(v^{2}(t_{j}-t_{c})^{2}-\vec{p}_{j}^{T}\vec{p}_{j}\right)\\

& + & \left(v(t_{i}-t_{c})-v(t_{j}-t_{c})\right)\vec{p}_{c}^{T}\vec{p}_{c}\\

& + & v(t_{j}-t_{c})\left(\vec{p}_{i}^{T}\vec{p}_{i}-v^{2}(t_{i}-t_{c})^{2}\right)\in\mathbb{R}

\end{alignedat}

\]

This can be expressed as just

\[

\textbf{A}x=b

\]

where the three pairs \(i,j\) define the matrix \(\textbf{A}\in\mathbb{R}^{3\times3}\) and the vector \(b\in\mathbb{R}^{3}\): each \(\vec{a}_{ijc}^{T}\) is a row of \(\textbf{A}\) and each \(b_{ijc}\) is the corresponding entry of \(b\). This can be solved using LP or some other more or less direct method. We will simply be computing the inverse of \(\textbf{A}\) such that

\[

\hat{x}=\textbf{A}^{-1}b=\textbf{A}\backslash b

\]
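The whole procedure can be sketched in Python with NumPy (a hypothetical re-implementation, not the original MATLAB script; the pairing of non-reference sensors below is one valid choice of three unique pairs):

```python
import numpy as np

def tdoa_locate(p, t, v):
    """Estimate the emitter position from 5 sensor positions p (5x3),
    timestamps t (5,) and propagation speed v via the linear system."""
    c = int(np.argmin(t))                       # first sensor to register
    o = [i for i in range(len(t)) if i != c]    # remaining sensors
    pairs = [(o[0], o[1]), (o[1], o[2]), (o[2], o[3])]  # three unique pairs

    A = np.zeros((3, 3))
    b = np.zeros(3)
    for row, (i, j) in enumerate(pairs):
        di = v * (t[i] - t[c])                  # v (t_i - t_c)
        dj = v * (t[j] - t[c])                  # v (t_j - t_c)
        A[row] = 2.0 * (dj * (p[i] - p[c]) - di * (p[j] - p[c]))
        b[row] = (di * (dj**2 - p[j] @ p[j])
                  + (di - dj) * (p[c] @ p[c])
                  + dj * (p[i] @ p[i] - di**2))
    return np.linalg.solve(A, b)                # MATLAB: x_hat = A \ b

# Quick check with a known emitter position and exact timestamps
p = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
              [0., 0., 1.], [1., 1., 1.]])
x_true = np.array([0.3, -0.2, 0.4])
v = 2.0
t = 10.0 + np.linalg.norm(p - x_true, axis=1) / v   # shared-clock timestamps
x_hat = tdoa_locate(p, t, v)
```

Note that only timestamp differences enter the system, so the clock offset (10.0 above) is arbitrary.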

## Application

In order to aid intuition it might be helpful to translate the problem into a real-world interpretation. The problem analogy is that of submarine localization using passive acoustic sensors.

The submarine, i.e., the signal emitter, is placed somewhere in a six cubic kilometer body of water. The acoustic sensors are placed randomly in a one cubic kilometer body of water. Both bodies of water have the coordinate system origin at their center.

The following MATLAB code embodies this scenario: multilateration.m.
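For readers without MATLAB, the scenario can be approximated in Python as below; the exact dimensions, the random placement and the speed of sound in water (~1480 m/s) are my own illustrative assumptions, and multilateration.m remains the authoritative version:

```python
import numpy as np

rng = np.random.default_rng(42)
v = 1480.0                                   # speed of sound in water, m/s (assumed)

# Five acoustic sensors placed randomly in a 1 km cube centred on the origin
p = rng.uniform(-500.0, 500.0, size=(5, 3))

# The submarine hides somewhere in the larger surrounding body of water
x_true = rng.uniform(-900.0, 900.0, size=3)

# Timestamps on a common clock with an arbitrary offset
t = 12345.0 + np.linalg.norm(p - x_true, axis=1) / v

# Linear system A x = b built from three unique pairs of non-reference sensors
c = int(np.argmin(t))
o = [i for i in range(5) if i != c]
pairs = [(o[0], o[1]), (o[1], o[2]), (o[2], o[3])]
A = np.zeros((3, 3))
b = np.zeros(3)
for row, (i, j) in enumerate(pairs):
    di, dj = v * (t[i] - t[c]), v * (t[j] - t[c])
    A[row] = 2.0 * (dj * (p[i] - p[c]) - di * (p[j] - p[c]))
    b[row] = (di * (dj**2 - p[j] @ p[j])
              + (di - dj) * (p[c] @ p[c])
              + dj * (p[i] @ p[i] - di**2))

x_hat = np.linalg.solve(A, b)                # MATLAB: x_hat = A \ b
```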

## Comments

Thanks for the nice effort. Why doesn’t the code work if the number of sensors is less than 5? Maybe I missed something, but shouldn’t 3 sensors be enough to find the three unknowns (x, y, z)?


Hello,

thanks for your work. I am just trying to trilaterate the position of a source given 5 sensors.

The first thing is, what do you mean by ‘unique pairs’? And what is the meaning of receiver ‘c’? Is it a reference receiver?

Hey Andersen,

Do you still have the MATLAB code for drawing the hyperbolas which intersect at position X?

Thanks

I would love to have the same as well!

Hi, it seems that some parts of the derivations are outside the boundaries of the webpage. Can you take a look into that? Also, what is the paper you are referring to?

I guess you are referring to the paper this post was based on. This post was based on a larger paper I wrote some time ago. What is left here is what worked. Notice that this post doesn’t even have references, so look at it as more of a hands-on thing than a research paper.

Hi, I think some parts of the derivations are only halfway shown on the page.

I split the derivation up. It is somewhat less readable now, but at least it isn’t truncated.

First of all, thanks for the great tutorial and Matlab script. All very helpful.

I’m curious if anyone experienced any “Rank deficient” warnings when running the Matlab script? I only get the warning for certain positions and it occurs on the line with the backslash operator (=> x_hat_inv = A\b;)

I found that whenever I get this warning, the solution is off quite a bit.

Any ideas?

This is indeed a problem which can occur. It means that the system of linear equations isn’t solvable to a unique answer. Two or more of the equations of Ax=b aren’t linearly independent. An example of linear dependence is x = y+z and 2x = 2y+2z. Both equations provide the same information.

I remember talking to someone who has worked on a similar problem, and he said that he solved this issue by perturbing the TDOAs slightly, i.e., (evenly) adding random noise. Do that repeatedly and solve each system. You will get many slightly different results with predictable errors, but averaged out the position will be accurate. Haven’t tried this, but might be a practical solution.
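To make that dependence concrete in matrix form (a made-up numeric example, not the post’s code): the first two rows below encode x = y+z and 2x = 2y+2z, so the matrix has rank 2 and the backslash solve cannot pin down a unique solution.

```python
import numpy as np

# Rows 1 and 2 carry the same information (row 2 = 2 * row 1)
A = np.array([[1.0, -1.0, -1.0],
              [2.0, -2.0, -2.0],
              [0.0,  0.0,  1.0]])

print(np.linalg.matrix_rank(A))  # prints 2, i.e. rank-deficient for a 3x3 system
```

Checking `np.linalg.matrix_rank(A)` (or MATLAB’s `rank(A)`) before solving is a cheap way to detect this case.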

I’m a surveying engineering student in Senegal. Thanks, nice work, but I need help with multilateration. I would like to know on what basis the total station makes its choice between the different possible solutions developed in bilateration, and how to know precisely the position of a sought point M in multilateration within its area of uncertainty.

I would be very interested to learn that as a matter of extreme urgency.

I am sorry but I don’t think I understood the question. In the simulation the precision is pretty much perfect, but in the real world your equipment will be the limiting factor. I would measure the precision empirically when applying this method in practice.

I’ve ported the MATLAB example provided here to Python. The code can be found here.

https://github.com/paulhayes/MultilaterationExample

Thank you so much for contributing. I have been thinking of doing a port too; to R for instance.

Amazing work here. Great explanation, and easy to understand matlab example too!

I noticed one small thing about the MATLAB example. You create t, a table of times relative to the first sensor to detect the emitter, c. However, when making matrix A and vector b, you always subtract t(c) from t(i) and t(j). Now each value of t(c) is in fact 0, so the result is still the same, but it does expand the calculation somewhat, making it harder to read.

Nice catch Paul. You are right that in my specific code example we don’t need to subtract t(c) since it is zero. However, t(c) does not have to be zero in general, it just happened to be in my example. In a real scenario you would use a running clock with times based on something like Unix time. Thus, for instance, t(j)-t(c) makes sense as a time difference, or TDOA. I hope that clears it up; if not and I’ve made a mistake, please let me know and I’ll try to correct it.

Yes your link to the MATLAB file is no longer working. Would it be possible to get it fixed? A friend and I are working on an open source gun shot detection system (https://github.com/twinn/ShotTracker) that could be deployed in neighborhoods with high crime rate. I’m working on the software to do the calculations you have described and the concrete submarine example would be helpful in determining if my process is correct.

The link is now fixed. I moved servers and the MIME type wasn’t set up correctly for m-files. You can find it here. Sorry about that. I read your intro to ShotTracker, and I must say that I find it intriguing. Good work, looking forward to see your progress.

Hello, I am interested in your detection system, specifically how you deal with the TDOA in your gunshot calculation. I would like to ask how you calculate the TDOA if you have real detection data from your detection devices (microphones). If I understand correctly, this MATLAB file only works if I know the exact travel time from the emitter to the sensors. But in fact I never know this time. I only have the time of arrival of the signal at each sensor.

The MATLAB code I provide here uses timestamps only, and they don’t need to start at zero. In practice the timestamps could be high precision unix time. TDOAs are calculated. In the paper I very explicitly say “TDOA between each sensor’s timestamp and the initial sensor’s timestamp is t(i) – t(c).” Here t(i) and t(c) are timestamps while together t(i) – t(c) is a TDOA.

Thank you very much for making this solution available!

The link to the MATLAB file seems broken, could you please check it?

The link is now fixed. Sorry about that.

Hi

I’m a final year engineering student currently doing my project in this line of work. I’ve tried searching but have had no luck finding the paper that you mention in this post. I was wondering what the title was, as I am quite interested in reading it.

The link should now work. What you see here is the part of the paper which actually worked. The rest of the paper is just a waste of time.

Any idea how this would work with only 4 sensors (the usual 3D multilateration)? What’s the performance difference anyway?

By far the simplest way to solve the three dimensional version of the problem is using five sensors and solving the linear problem as described in the post. The original paper I wrote included a method for solving it with only four sensors using Newton’s method. This failed miserably and only converged 63% of the time, and was 9 orders of magnitude slower than the linear problem. The problem is that the four sensor problem is not convex, and thus hard to solve. To cite the conclusion of the original paper:

“…, from a practical point of view, it is hard to imagine a situation where you are stuck with having to settle with exactly four sensors and not able to introduce a fifth sensor. It means the difference between working with a non-convex problem using greedy minimization which don’t guarantee correct solutions versus having a LP problem which is readily solvable with a multitude of methods.”

I have been looking into this for a university project as well. After a lot of searching I found a closed-form analytical solution for the problem in 2 dimensions using three receivers.

This is detailed in Chan’s article here: http://www.vtvt.ece.vt.edu/research/references/uwb/ranging_mobile_location/estimator_hyperbolic_location.pdf

I think it would be straightforward to apply it to 3-dimensional space with 4 receivers. I have made a MATLAB implementation for the 2-D case if you are interested.

Snorre

Thanks, Snorre. This looks very interesting.

Hello,

Thanks for your link !

Can you share with us your matlab implementation in 2D ?

Thanks !