Bayesian inference is the process of combining observational data with prior beliefs to form better estimates of the state of the world, and it has long been suspected to be an important mechanism by which humans operate in complex environments. Exact Bayesian inference, however, is intractable for most real-life problems. To perform inference approximately, we need simple abstractions of the data, such as recognizing objects and their positions in a scene filled with a multitude of colors and shades. I propose a general computational framework that achieves this by combining Bayesian inference algorithms with artificial neural networks, the latter of which can flexibly learn suitable abstractions from data. I will build models under this framework to solve complex inference problems in neuroscience and cognitive psychology, and I hope to find evidence for how humans make inferences in a world filled with uncertainty.
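As a minimal illustration of the kind of combination described above, consider amortized inference in a toy conjugate-Gaussian model. This sketch is entirely my own assumption, not the proposed framework: the prior, likelihood, and sample sizes are arbitrary choices, and the "recognition model" is a plain least-squares regression standing in for a neural network. The point is that a model trained to predict the latent variable from simulated data recovers the exact Bayesian posterior mean, which is the principle that makes learned recognition networks useful when the exact posterior is unavailable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model (assumed for illustration only):
# prior  mu ~ N(0, tau^2);  data  x_i ~ N(mu, sigma^2), i = 1..n.
tau, sigma, n = 1.0, 2.0, 10

def exact_posterior_mean(xbar):
    """Conjugate-Gaussian posterior mean of mu given the sample mean xbar."""
    return n * tau**2 * xbar / (n * tau**2 + sigma**2)

# Simulate many (data, latent) pairs from the generative model.
n_sims = 20000
mu = rng.normal(0.0, tau, n_sims)                        # latents drawn from the prior
xbar = mu + rng.normal(0.0, sigma / np.sqrt(n), n_sims)  # sufficient statistic of each dataset

# "Recognition model": regress the latent on the data summary.
# The least-squares predictor of mu given xbar is the posterior mean,
# so the fitted slope should approach n*tau^2 / (n*tau^2 + sigma^2).
slope = (xbar @ mu) / (xbar @ xbar)

print(f"fitted slope:   {slope:.3f}")
print(f"analytic slope: {n * tau**2 / (n * tau**2 + sigma**2):.3f}")
```

In a realistic setting the linear regression would be replaced by a neural network mapping raw data to posterior parameters, but the training principle, predicting simulated latents from simulated data, is the same.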