Given a statistical model and data, inference allows you to “infer” a useful generalization (a hypothesis). Probabilistic inference typically means computing the posterior distribution of a hypothesis given the data. For example, the presence of a spider web (data/evidence) may increase the likelihood that no one has used that door for some time (hypothesis).
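The spider-web example can be sketched with Bayes’ rule. All probabilities below are made-up illustrative numbers, not estimates from any real data:

```python
# Posterior inference via Bayes' rule on the spider-web example.
# Every number here is a hypothetical, illustrative value.

prior_unused = 0.2            # P(door unused for some time)
p_web_given_unused = 0.9      # P(spider web | door unused)
p_web_given_used = 0.1        # P(spider web | door in regular use)

# Marginal probability of seeing a web (law of total probability)
p_web = (p_web_given_unused * prior_unused
         + p_web_given_used * (1 - prior_unused))

# Posterior: P(door unused | spider web) -- higher than the prior,
# so the evidence has increased our belief in the hypothesis.
posterior_unused = p_web_given_unused * prior_unused / p_web
print(round(posterior_unused, 3))  # → 0.692
```

Note that the posterior (about 0.69) is well above the prior (0.2): observing the web shifts belief toward the “door unused” hypothesis, exactly as the text describes.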
Learning, on the other hand, means learning the connection between the evidence and the hypothesis. In a statistical model, for example, learning may mean estimating the parameters/weights of the model that minimize the error on the observations.
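A minimal sketch of this kind of learning, using hypothetical data: fit a single weight w in the model y = w·x by minimizing the squared error on the observations (the closed-form least-squares solution):

```python
# Learning = estimating model parameters that minimize observation error.
# Hypothetical data, roughly y = 2x plus noise.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Closed-form least-squares solution for one weight:
#   w = sum(x*y) / sum(x*x)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(round(w, 3))  # → 1.99
```

Here the “connection” learned between evidence (xs, ys) and hypothesis is the single point estimate w, close to the true slope of 2.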
Finally, in a Bayesian setting, the parameters themselves are random variables and can therefore be inferred as well; Bayesian learning is thus an instance of inference.
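To make this concrete, here is a standard beta-binomial example (the numbers are hypothetical): instead of a point estimate, the coin’s bias theta is treated as a random variable with a Beta prior, and observing data yields a posterior over theta by conjugacy.

```python
# Bayesian learning as inference: the parameter theta (coin bias)
# is itself a random variable with a Beta prior; data updates it.

alpha, beta = 1.0, 1.0        # uniform Beta(1, 1) prior over theta
heads, tails = 7, 3           # hypothetical observed coin flips

# Conjugate update: posterior is Beta(alpha + heads, beta + tails)
alpha_post, beta_post = alpha + heads, beta + tails
posterior_mean = alpha_post / (alpha_post + beta_post)
print(round(posterior_mean, 3))  # → 0.667
```

The result of “learning” here is a full posterior distribution over the parameter, not a single value, which is why Bayesian learning reduces to posterior inference.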
Here are some links that give more information on inference and learning and their applications.
- Tutorial on Inference and Learning in Bayesian Networks
- Methods of Inference and Learning for Performance Modeling of Parallel Applications
- Random Walk Inference and Learning in A Large Scale Knowledge Base
- Advances in Algorithms for Inference and Learning in Complex Probability Models
- Discrete Inference and Learning in Artificial Vision – Course at Coursera