Introduction to Bayesian Modeling
In the frequentist approach, probabilities are estimated based on observed frequencies in available data.
Bayes Problem: given the number of times an unknown event has happened and failed, find the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named.
Bayesian statistics uses probability as the only way to describe uncertainty in both data and parameters. Everything that is not known for certain is modeled with a distribution and treated as a random variable.
Consider the standard 2x2 table classifying subjects by exposure and disease status:

|              | Exposed | Unexposed | Total |
|--------------|---------|-----------|-------|
| Diseased     | a       | b         | m1    |
| Not Diseased | c       | d         | m0    |
| Total        | n1      | n0        | n     |
Recall that the odds of an outcome (disease) is the probability of having the outcome divided by the probability of not having it. The odds ratio (OR) compares the odds of disease in the exposed group with the odds in the unexposed group: OR = (a/c) / (b/d) = ad / bc.
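As a quick numerical check, the OR can be computed directly from the cell counts; the counts below are hypothetical and chosen only for illustration.

# Hypothetical 2x2 counts (values assumed purely for illustration)
tab <- matrix(c(30, 20,    # Diseased:     Exposed, Unexposed
                70, 80),   # Not Diseased: Exposed, Unexposed
              nrow = 2, byrow = TRUE,
              dimnames = list(c("Diseased", "Not Diseased"),
                              c("Exposed", "Unexposed")))
# OR = ad / bc
(tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])
## 1.714286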
Bayesian statistics takes the point of view that the OR and the relative risk (RR) are uncertain, unobservable quantities.
Prior probability: the probability model/distribution built from existing data and knowledge, before the new data are seen
Posterior probability: the model updated by combining the prior with the new data
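A minimal sketch of this updating step, assuming a Beta(1, 1) prior on an unknown event probability p and hypothetical binomial data (the prior choice and the counts are assumptions, not from the text):

# Assumed Beta(1, 1) prior for an unknown probability p
prior_a <- 1; prior_b <- 1
# Hypothetical new data: 7 events out of 20 trials
events <- 7; trials <- 20
# Conjugate update: posterior is Beta(prior_a + events, prior_b + trials - events)
post_a <- prior_a + events
post_b <- prior_b + (trials - events)
# Posterior mean of p
post_a / (post_a + post_b)
## 0.3636364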
Bayes Theorem
P(H) = prior probability of H
P(data | H) = likelihood of the data under H
P(H | data) = P(data | H) * P(H) / P(data) = posterior probability of H
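A small numerical illustration of the theorem; the prior and the two conditional probabilities below are assumed values, not real data.

# Assumed quantities for illustration
p_H      <- 0.10   # prior probability P(H)
p_d_H    <- 0.80   # P(data | H)
p_d_notH <- 0.20   # P(data | not H)
# P(data) by the law of total probability
p_data <- p_d_H * p_H + p_d_notH * (1 - p_H)
# Posterior P(H | data) from Bayes theorem
p_d_H * p_H / p_data
## 0.3076923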
Monte Carlo Methods
Monte Carlo algorithms are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
# Step 1: Generate 1000 samples from a Binomial(8, 0.5) distribution
x <- rbinom(1000, size = 8, prob = 0.5)
# Plot a histogram of the samples
hist(x, main = "")
# Step 2: Estimate P(X <= 2) as the
# proportion of samples <= 2
sum(x <= 2) / 1000
## 0.142
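For this simple case the exact probability is also available in closed form, so the Monte Carlo estimate can be checked against R's pbinom():

# Exact P(X <= 2) for X ~ Binomial(8, 0.5), for comparison
pbinom(2, size = 8, prob = 0.5)
## 0.1445312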