We are now past Labor Day and in the homestretch of the 2024 campaign, and a lot of people are asking me and others in political polling and media: Who’s going to win in November? Is the race Donald Trump’s to lose? Can Kamala Harris turn her momentum into victory?

With people craving this peek into the future, the spotlight is intensifying on a part of my industry that isn't especially well understood: election forecasters and their predictive models. This work is somewhat different from the polls we all know so well, and I want to lay out what matters most about election forecasting, some of the reasons predictive models can yield such different results (like better chances for Mr. Trump in some models and better chances for Ms. Harris in others) and what people should keep in mind about forecasting and models so they don't drive themselves crazy trying to game out the future over the next nine weeks.

First, the difference between polling and forecasting (and predictive models) boils down to this: Polls give you a snapshot of voter opinion at a particular moment. By contrast, election forecasters try to look ahead and assess the likelihood of a particular outcome. Forecasters draw on those polls as they build a predictive model, into which they continually feed more polls and make adjustments (more on that below) to compute the chances of a given candidate winning. So while a poll might say that Ms. Harris is ahead of Mr. Trump by two percentage points in a given state — e.g., 49 percent to 47 percent — a predictive model might say that she wins the presidency in 53 of every 100 runs of the model.
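The poll-to-probability step can be sketched with a toy simulation. Everything here is illustrative, not any forecaster's actual model: the 3-point polling-error assumption, the function name and the number of runs are all made up for the example. Real models layer in many more adjustments, but the core idea — perturb the polled margin many times and count how often each candidate comes out ahead — looks roughly like this:

```python
import random

def simulate_win_probability(harris_share, trump_share,
                             polling_error_sd=3.0, n_sims=10_000, seed=0):
    """Toy Monte Carlo forecast: jitter the polled margin with a random
    polling error and count the share of simulations Harris wins.

    polling_error_sd is an invented stand-in for historical poll error.
    """
    rng = random.Random(seed)
    margin = harris_share - trump_share  # e.g., 49 - 47 = +2 points
    wins = sum(1 for _ in range(n_sims)
               if margin + rng.gauss(0, polling_error_sd) > 0)
    return wins / n_sims

# A 2-point lead with a 3-point error assumption yields a probability
# comfortably above a coin flip but far from certainty.
p = simulate_win_probability(49, 47)
```

This is why a 2-point poll lead and, say, a 53-in-100 or 70-in-100 forecast are not contradictory statements: they answer different questions, and the probability depends heavily on how much error the modeler assumes.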

Some forecasters’ models slightly favor Mr. Trump, some slightly favor Ms. Harris, and some treat the current race as a true coin flip. (On election night, my Times colleague Nate Cohn gets into the short-term forecasting game with the Needle.) Polling is often misunderstood; election forecasting is even more complicated, making it even more likely that the results of a forecast will be badly misinterpreted.

There is considerable debate about what election forecasters should take into account. Think of an election model like a recipe for chocolate chip cookies. The goal is the same: produce the most accurate forecast of how an election will go (or produce the best chocolate chip cookie). But how you get there can vary significantly; New York Times Cooking has an editors’ collection of 16 chocolate chip cookie recipes, some requiring sea salt and one with coconut sugar.

In election forecasts, there’s the main ingredient, of course: polls. Some forecasters believe that an election model should be driven entirely by the results of public opinion polls, arguing that such polls are the only real window into how voters might behave and that votes are the only metric that matters in the end. Some models might give more weight to polls with a track record of accuracy or polls conducted more recently. For instance, the Quinnipiac poll that my husband took last week has more weight in Nate Silver’s model than this more dated poll from Morning Consult but less weight than this fresh poll from Suffolk University.
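That weighting idea can also be sketched in miniature. The specifics below are assumptions for illustration only — the 14-day half-life, the quality multiplier and the function names are invented, not how Nate Silver's or anyone else's model actually scores polls — but they show the general shape: older polls decay in influence, and higher-quality pollsters get a boost:

```python
from datetime import date

def poll_weight(poll_end_date, today, quality=1.0, half_life_days=14):
    """Toy poll weight: exponential decay by age, scaled by a quality factor.

    half_life_days and quality are illustrative assumptions; real models
    derive these from pollster track records and more elaborate decay rules.
    """
    age_days = (today - poll_end_date).days
    return quality * 0.5 ** (age_days / half_life_days)

# A fresh poll counts for much more than a month-old one of equal quality.
today = date(2024, 9, 3)
fresh = poll_weight(date(2024, 9, 1), today)
stale = poll_weight(date(2024, 8, 1), today)
```

A model would then average the polled margins using these weights, so a single dated survey can't drag the forecast far from what recent, reliable polling shows.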