Stay consistent past March with a simple annual plan, monthly check-ins, and momentum resets backed by proven behavior-change coaching methods.
Some clients find it useful to map a full year's goals alongside quarterly milestones so there's a built-in review structure. Would that kind of longer arc be helpful to work with?
Client completes the Goal Statement field for all three goals without difficulty. When they reach the Success Metric field, the entries are vague or absent: 'make progress,' 'improve significantly,' 'feel more confident.' The absence of a metric is not laziness — it is a symptom of goal ambiguity. A goal with no metric has no failure state, which means it also has no real commitment. When the Q1 milestone table is filled in, the entries match the level of vagueness in the metric field: 'continue working on it,' 'check in with myself.' The structure of the tracker reveals the ambiguity in the goal before the year is a quarter old.
Frame the Success Metric field as the test of the goal's clarity. 'The hardest field on this page is the Success Metric — not because the goal isn't real, but because a metric requires you to say what done looks like. If you can't define done, you can't tell whether you're making progress. We're going to spend most of our time in this session on that field.' The resistance from clients who avoid metrics is often that they find them reductive — 'my goals aren't the kind of thing you can measure.' Name it: 'The metric doesn't have to be a number. It can be a visible, observable change — something you and I could both agree has happened or hasn't. What would we both be able to see that would tell us this goal is achieved?'
Watch for the Success Metric field being completed with a description of the goal restated, not a measurement of it: 'I will have improved my leadership visibility' restates the goal; 'I will have presented at two cross-functional meetings by Q2' measures it. Also watch for the quarterly milestones being filled in as activities rather than outcomes: 'work on goal 2 this quarter' is an activity, not a milestone. A milestone should be something that either happened or did not happen at the end of the quarter — binary, not directional.
After all three goals are complete through the Goal Setting section, read the three Success Metrics back to the client and ask: 'For each of these — is this something we could both observe and agree on? Or is it still in the zone of impression?' The question creates a shared standard without requiring a purely quantitative metric. Then move to quarterly milestones and ask: 'Q1 is three months from now. What specifically will be true — not being worked on, but actually done — at the end of Q1 for each goal?' The debrief should produce binary Q1 milestones before the session ends.
Client sets three solid goals in January with clear metrics and a coherent quarterly milestone map. In March, they are not sure where they left the tracker. By June, they cannot recall the quarterly milestones for Q2. The pattern is familiar: structure created at the beginning of the year dissolves without a scheduled rhythm to sustain it. The Monthly Check-In section of this tool is where most of the tracker's value lives — and it is also the section that requires the most behavioral change to use. Setting goals is a single event; tracking is a habit, and the client has not built it.
Frame the Monthly Check-In section as the part the client will need to schedule, not just complete. 'The Goal Setting and Quarterly Milestones sections take about thirty minutes once. The Monthly Check-In section is the one you come back to twelve times. Before we fill in the January column today, I want to set the recurring appointment — a specific day each month, at a specific time, that is protected for this. Without that, the check-in doesn't happen.' The resistance from clients who have tried tracking before is discouragement: they have built systems that worked briefly and dissolved. Name it: 'The check-in works because it is boring — same time, same day, same twelve questions every month. The goal is not to feel energized by it. It is to catch drift before it costs a quarter.'
Watch for the client completing the January check-in fields during the session and then not scheduling the February appointment before leaving. The instrument is filled in; the habit is not established. Also watch for the 'Obstacles Encountered' field in the monthly check-in being left blank even when the client reports being off track on a goal. Naming the obstacle in the moment it appears is the only time it can be addressed in the plan for the following month — a blank Obstacles field means the obstacle will reappear.
After completing the January check-in together, ask: 'What day this month will you complete the February check-in — specifically?' Then: 'When the Obstacles Encountered field is blank in a month where you're off track, what does that tell you about how you used the check-in?' The question is retrospective even though February hasn't happened yet — it primes the client to notice and name obstacles rather than skip the field. Close with: 'What would make it more likely that you open this again in thirty days than set it aside?'
Client set goals in January — either with this tracker or through another process. It is now mid-year, and the Q1 and Q2 milestones have either been hit, partially hit, or quietly abandoned. The client wants a structured way to see where they stand and decide what to do with the second half of the year. The Annual Goal Tracker is used here as a recovery instrument rather than a planning one: the Goal Setting and Q1/Q2 milestone columns are populated with what was planned and what actually happened, and the Q3/Q4 columns are then set realistically based on that record.
Frame the mid-year use as more informative than a fresh January start. 'You have six months of data. We're going to use it. The Goal Setting section is a record of what you committed to in January — fill it in as accurately as you can remember. Then the Q1 and Q2 milestone columns get filled in honestly: what you planned and what actually happened. That record is more useful than starting fresh, because it shows us exactly where the gap opened.' The resistance from clients in this situation is embarrassment about the distance between January plans and June reality. Name it: 'The gap is not a verdict on you. It is information about what the plan was missing. We're using the tracker to read that information, not to score the first half of the year.'
Watch for the client revising January goals retroactively to match what actually happened — writing the goal as what they did rather than what they planned, which erases the gap rather than examining it. The Q1 and Q2 milestone fields should show both the planned milestone and what actually occurred, side by side. Also watch for the Q3 and Q4 milestones being set at the same level of ambition as the January milestones despite clear evidence from Q1/Q2 that those milestones were not achievable. A mid-year recalibration requires honesty about the rate of progress actually demonstrated, not the rate intended.
After the Goal Setting and Q1/Q2 columns are complete, ask: 'Looking at the gap between what you planned and what happened in each quarter — where did the slippage start? Was it the goal, the milestone, the obstacle, or the tracking?' The question looks for the structural failure point rather than assigning blame. Then: 'For Q3 and Q4, what milestone is realistically achievable given the pace of Q1 and Q2?' The debrief should produce Q3 milestones that are calibrated to actual demonstrated progress, not January optimism.