
F61 IMSS Annual Lecture Workshop: Evaluating Forecasts & Training Forecast Models


Event Information

IMSS 2025
Date of Event
9th June 2025
Last Booking Date for this Event
5th July 2025

Description

In forecast evaluation, a scoring rule provides an evaluation metric for probabilistic predictions or forecasts: it assigns a numerical score to a forecast by comparing the forecast with the realised observation. A scoring rule is called proper if the score is optimised in expectation when the true data distribution is issued as the forecast. This property is considered a necessary condition for decision-theoretically principled forecast evaluation.

The class of proper scoring rules is large and diverse, and different proper scoring rules may evaluate different aspects of a forecast. For example, it has been stated that the aim of probabilistic forecasting should be to maximise the sharpness of the forecast subject to calibration: there should be statistical compatibility between the predictive distribution and the observation while, at the same time, the forecast should provide as much information about the observation as possible. Different proper scoring rules assess these properties to different degrees.

Conversely, if we want a forecast to possess certain properties that are well measured by a particular proper scoring rule, it seems natural to use that same scoring rule as the loss function when estimating the forecast. This line of thinking is increasingly used in machine learning, in particular in applications such as meteorology, where there is a long tradition of forecast evaluation with proper scoring rules. We will cover the foundations of forecast evaluation with a focus on proper scoring rules and related evaluation metrics, discuss some recent developments and, in particular, connections to the training of machine learning algorithms.
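To make the propriety property concrete, here is a minimal illustrative sketch (not from the workshop materials) using the Brier score for a binary event, a standard example of a proper scoring rule. The true probability value 0.7 and the competing forecast 0.5 are arbitrary choices for illustration.

```python
# The Brier score S(q, y) = (q - y)^2 for a binary outcome y in {0, 1}
# is a proper scoring rule: if the true event probability is p, the
# expected score E[S(q)] = p*(q-1)^2 + (1-p)*q^2 is minimised at q = p.

def brier_score(q, y):
    """Brier score of forecast probability q for observed outcome y."""
    return (q - y) ** 2

def expected_brier(q, p):
    """Expected Brier score of forecast q when the true probability is p."""
    return p * brier_score(q, 1) + (1 - p) * brier_score(q, 0)

p_true = 0.7  # hypothetical true event probability
honest = expected_brier(p_true, p_true)  # issuing the true distribution
hedged = expected_brier(0.5, p_true)     # issuing a miscalibrated forecast

# Propriety: the truthful forecast achieves the lowest expected score.
assert honest < hedged
print(f"E[S] at q=0.7: {honest:.3f}, at q=0.5: {hedged:.3f}")
```

The same comparison holds for any forecast q other than the truth, which is what makes minimising a proper score a sensible training objective as well as an evaluation criterion.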

https://www.ucl.ac.uk/mathematical-statistical-sciences/events/2025/jun/imss-annual-lecture-workshop-evaluating-forecasts-and-training-forecast-models

Attendee Category    Cost
Non-UCL Attendees    £25.00
