A significant problem in integrating acting and planning is how to maintain consistency between the planner’s descriptive action models, which abstractly describe what the actions do, and the actor’s operational models, which describe how to perform the actions with rich control structures for closed-loop online decision-making. Operational models allow the actor to deal with a variety of contexts and to respond to unexpected outcomes and events in a dynamically changing environment. To circumvent the consistency problem, we use the actor’s operational models both for acting and for planning. Our acting-and-planning algorithm, APE, uses hierarchical operational models inspired by those in the well-known PRS system. But unlike the reactive PRS algorithm, APE chooses its course of action using a planner that performs Monte Carlo sampling over simulated executions. Our experiments with this approach show substantial benefits in the success rates of the acting system, in particular in domains with dead ends.
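To make the sampling idea concrete, here is a minimal sketch of Monte Carlo method selection. All names and the toy domain are assumptions for illustration, not the paper’s implementation: candidate refinement methods for a task are each rolled out many times in a simulated environment, and the method with the highest estimated success rate is chosen.

```python
import random

def simulate(method, p_success, rng):
    """Toy stand-in for executing an operational model in simulation:
    each hypothetical method succeeds with some probability that the
    planner can only estimate by sampling."""
    return rng.random() < p_success

def choose_method(methods, n_rollouts=200, seed=0):
    """Estimate each method's success rate from n_rollouts simulated
    executions and return the best (method, estimated_rate) pair."""
    rng = random.Random(seed)
    best, best_rate = None, -1.0
    for name, p in methods.items():
        successes = sum(simulate(name, p, rng) for _ in range(n_rollouts))
        rate = successes / n_rollouts
        if rate > best_rate:
            best, best_rate = name, rate
    return best, best_rate

# Hypothetical methods with different (hidden) success probabilities.
methods = {"retry-loop": 0.9, "direct": 0.6}
print(choose_method(methods))
```

In a full system the `simulate` step would execute the method’s operational model, including its control structures, against a simulated state; here a fixed success probability stands in for that execution.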