So, what are you using to get the battle target vector? If you're just setting it to the position of the BattleTarget object, then you might as well just parent it and save a process.
Moving the Anchor to the Battle Target may end up being a clunky method: the agent's behavior changes drastically when you adjust the minimum distance to the Anchor during combat. I'm thinking of ways to avoid doing that, or of using an array system to gather nearby obstacles, weight them into a safe direction vector, and then weigh that against a projected path of the Battle Target. So there are some advanced things that need to happen in the Brain, I think, to make the system more stable than it is now.
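To make the weighting idea concrete, here's a rough sketch (in Python for brevity, and all names and thresholds are hypothetical, not from any actual implementation): each nearby obstacle contributes a repulsion vector weighted by how close it is, and that sum is blended against a seek vector toward the Battle Target's projected position rather than its current one.

```python
import math

def normalize(v):
    """Return the unit vector of v, or (0, 0) for a zero vector."""
    mag = math.hypot(v[0], v[1])
    return (v[0] / mag, v[1] / mag) if mag > 0 else (0.0, 0.0)

def safe_direction(agent_pos, obstacles, target_pos, target_vel,
                   lookahead=1.0, avoid_radius=5.0, avoid_weight=2.0):
    # Project where the Battle Target will be after `lookahead` seconds,
    # and seek toward that point instead of the target's current position.
    projected = (target_pos[0] + target_vel[0] * lookahead,
                 target_pos[1] + target_vel[1] * lookahead)
    seek = normalize((projected[0] - agent_pos[0],
                      projected[1] - agent_pos[1]))

    # Accumulate a repulsion vector from the obstacle array:
    # closer obstacles get a higher weight (linear falloff here).
    avoid = [0.0, 0.0]
    for ob in obstacles:
        away = (agent_pos[0] - ob[0], agent_pos[1] - ob[1])
        dist = math.hypot(away[0], away[1])
        if 0 < dist < avoid_radius:
            w = (avoid_radius - dist) / avoid_radius  # 0..1, 1 = touching
            unit = normalize(away)
            avoid[0] += unit[0] * w
            avoid[1] += unit[1] * w

    # Blend seek and avoidance, letting avoidance weigh more heavily.
    blended = (seek[0] + avoid[0] * avoid_weight,
               seek[1] + avoid[1] * avoid_weight)
    return normalize(blended)
```

The point is just that the "safe direction" falls out of summing weighted contributions, so an obstacle sitting on the projected path deflects the agent instead of the Anchor distance having to change.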
The idea is to let the Brain just collect data and decide what 'state' the entity should be in - i.e. Wander, Combat, Flee, Follow, etc. Then, in those states, it sends downstream events to the other FSMs to activate certain paths of actions, keeping it all modular. That'll allow us to build everything separately, define conditions in the Brain, then send messages to the other FSMs about doing specific things. Ideally the Brain would just collect data, decide what state to be in or what needs to be done, and tell the other FSMs to do it - all of the dirty vectors and such stay down in those FSMs.
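That Brain-as-coordinator split could look something like the following sketch (Python pseudocode for the structure; the class names, state names, and sense keys are all made up for illustration): the Brain reads sensed data, picks a high-level state, and only broadcasts an event when the state changes, while the downstream FSMs react and own their own action paths.

```python
from enum import Enum, auto

class BrainState(Enum):
    WANDER = auto()
    COMBAT = auto()
    FLEE = auto()
    FOLLOW = auto()

class Brain:
    """Collects data, decides the state, and notifies downstream FSMs."""
    def __init__(self):
        self.state = BrainState.WANDER
        self.listeners = []  # downstream FSMs register here

    def subscribe(self, fsm):
        self.listeners.append(fsm)

    def update(self, senses):
        # Decide the high-level state from collected data only;
        # no vector math happens up here.
        if senses.get("health_low"):
            new_state = BrainState.FLEE
        elif senses.get("enemy_visible"):
            new_state = BrainState.COMBAT
        elif senses.get("leader_nearby"):
            new_state = BrainState.FOLLOW
        else:
            new_state = BrainState.WANDER
        # Only send a downstream event on an actual state change.
        if new_state is not self.state:
            self.state = new_state
            for fsm in self.listeners:
                fsm.on_brain_event(new_state)

class MovementFSM:
    """Stub downstream FSM; a real one would run its own action paths
    and keep all of the dirty vector work internal."""
    def __init__(self):
        self.mode = BrainState.WANDER

    def on_brain_event(self, state):
        self.mode = state
```

Since each FSM only receives state-change events, you can build and test them separately and swap conditions in the Brain without touching the action logic.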
Here's a good theory reference for the weighting and array ideas. Without adding some context for the Brain to consider, I think the AI is always going to be a little dumb and non-persistent in the big picture.
http://arges-systems.com/blog/2012/07/03/unitysteer-updates-tospherical-obstacle-avoidance/