If we repeat a Bernoulli trial \(n\) times with the same parameter and sum the results, we have the binomial distribution.
We therefore have two parameters, \(p\) and \(n\).
\(P(X=x)={n\choose x }p^x(1-p)^{n-x}\)
The mean is \(np\), which follows from the linearity of expectation over the \(n\) trials.
Because the trials are independent, the variances can be added together, giving \(np(1-p)\).
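As a quick sanity check, here is a minimal Python sketch (assuming NumPy is available; the seed and sample sizes are arbitrary) that sums independent Bernoulli trials and compares the sample mean and variance with \(np\) and \(np(1-p)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3

# Each row is one binomial draw: the sum of n independent Bernoulli trials.
samples = rng.binomial(1, p, size=(100_000, n)).sum(axis=1)

print(samples.mean(), n * p)             # ~6.0 vs 6.0
print(samples.var(), n * p * (1 - p))    # ~4.2 vs 4.2
```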
The mass function for the binomial case is:
\(f(x)=\dfrac{n!}{x!(n-x)!}p^x(1-p)^{n-x}\)
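Translated directly into Python (the helper name `binomial_pmf` is ours, not from a library), using `math.comb` for the binomial coefficient:

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# The probabilities over all possible outcomes sum to 1.
n, p = 20, 0.3
print(sum(binomial_pmf(x, n, p) for x in range(n + 1)))  # 1.0 (up to rounding)
```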
The multinomial distribution generalises the binomial to the case where each trial has more than \(2\) possible outcomes.
\(f(x_1,\dots,x_k)=\dfrac{n!}{\prod_i x_i!}\prod_i p_i^{x_i}\)
Here \(x_i\) counts the trials that produced outcome \(i\), so \(\sum_i x_i=n\) and \(\sum_i p_i=1\).
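A sketch of the same formula in Python (the helper `multinomial_pmf` is hypothetical, written for illustration); it also checks that the mass sums to \(1\) over every count vector with \(\sum_i x_i=n\):

```python
from math import factorial
from itertools import product

def multinomial_pmf(xs, ps):
    """n! / prod(x_i!) * prod(p_i^x_i), with n = sum(x_i)."""
    n = sum(xs)
    coef = factorial(n)
    for x in xs:
        coef //= factorial(x)  # exact: multinomial coefficients are integers
    prob = float(coef)
    for x, p in zip(xs, ps):
        prob *= p ** x
    return prob

# With k = 3 outcomes and n = 4 trials, the pmf sums to 1 over all
# count vectors (x_1, x_2, x_3) with x_1 + x_2 + x_3 = 4.
ps = (0.2, 0.3, 0.5)
n = 4
total = sum(
    multinomial_pmf(xs, ps)
    for xs in product(range(n + 1), repeat=3)
    if sum(xs) == n
)
print(total)  # 1.0 (up to rounding)
```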
We can use the Poisson distribution to model the number of independent events that occur in a given time period.
Over a very short time period, whether we observe an event is a Bernoulli trial.
\(P(1)=p\)
\(P(0)=1-p\)
Let’s consider the probability of observing no events up to time \(t\), written \(P(0;t)\).
Observing no events up to time \(t+\delta t\) means observing none up to time \(t\) and then none in the final short interval; since these are independent: \(P(0;t+\delta t)=P(0;t)(1-p)\).
And therefore:
\(P(0;t+\delta t)-P(0;t)=-pP(0;t)\)
By setting \(p=\lambda \delta t\):
\(\dfrac{P(0;t+\delta t)-P(0;t)}{\delta t}=-\lambda P(0;t)\)
Taking the limit as \(\delta t\to 0\):
\(\dfrac{dP(0;t)}{dt}=-\lambda P(0;t)\)
This differential equation is solved by:
\(P(0;t)=Ce^{-\lambda t}\)
If \(t=0\) then \(P(0;t)=1\), as no events can have occurred in zero time, and so \(C=1\).
\(P(0;t)=e^{-\lambda t}\)
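The derivation can be checked by simulation (assuming NumPy; the parameters are arbitrary): split \([0,t]\) into short intervals of length \(\delta t\), run a Bernoulli trial with \(p=\lambda\,\delta t\) in each, and count how often no event occurs:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, dt = 2.0, 1.5, 1e-3
steps = int(t / dt)

# One row per simulated run; each entry is a Bernoulli trial with p = lam * dt.
events = rng.random((20_000, steps)) < lam * dt
p_zero = (events.sum(axis=1) == 0).mean()

print(p_zero)            # ~0.05, fluctuates with the seed
print(np.exp(-lam * t))  # 0.0498
```

Shrinking \(\delta t\) reduces the discretisation bias, since \((1-\lambda\,\delta t)^{t/\delta t}\to e^{-\lambda t}\) only in the limit.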