Overheard at the
Editor's Note: Frank has already written two articles in this series on the Payola variant: one way back in the Fall 1997 Movement issue, and another in the Winter 2003 Adjustment issue. Here he continues the series with a detailed look at the mathematics behind Payola bidding tactics.
The Tortoise and Achilles are passing the time of day in their prison on a desert island location so obscure that it doesn't even appear as a supply centre on the Imperial map. Top secret documents describe them as evil masterminds, deadly terrorists apprehended when quick-witted Homeland Security agents intercepted their email exchanges planning an attack on England. They referred to their plot by the code name "Sea Lion".
Achilles: How do we get out of here? The food is abysmal. I don't have my heel guard with me and you're almost all out of turtle wax!
Tortoise: Well, something tells me we're going to be here for a while. We might as well amuse ourselves. I've a challenge for you.
Achilles: Oh ho! Let's see!
Tortoise: Here it is:
Fig 1: White to move and win
Achilles: Aaaiii!! You've been locked up too long, my friend. Don't you know I'm the world champion of Venetian-blind-crystal ball-exchange-zero sum diplomacy? I ask you, is that a problem for me?
Tortoise: Too hard? Well, all right. Try this one:
Fig 2: White's move, Black to win
Tortoise: The answer, of course, is two and a half. Yes, that's right. Black can force a win if (and only if) he has more than two and a half times as much money as white.
The position in Figure 2 is taken from a game of Tic-Tac-Toyola, a toy domain invented to study Payola bidding. A toy domain is a simplified version of some object of study. A well-chosen toy domain is simple enough to allow a solution, while retaining enough fidelity to allow generalisation back to the original problem. Let's take a closer look at the Tortoise's challenge:
How can player O (hereafter called 'Naughty') hope to stop player X (a.k.a. Crosser) in the position shown in figure 2? He couldn't, if he were playing the familiar childhood game of Naughts and Crosses. But here a layer has been added, so that player control over 'their' units is only indirect. Welcome to the world of Tic-Tac-Toyola, whose rules are quite simple:
The Tortoise's challenge was to find the cut-off ratio, b*, corresponding to the situation in figure 2. The cut-off ratio is the lowest number such that Naughty can force a win whenever his account balance exceeds b* times Crosser's account balance. If the funding ratio lies below b*, then Crosser can force his way to victory.
A helpful tool for visualising the possibilities of play is a Game Tree. The game tree is made up of "nodes" connected in a mother-daughter relationship. (A node can have many daughters but only one mother.) One node, known as the "root", is unique in being motherless. (A consequence of these mixed metaphors is that the root node is conventionally shown at the top of the game tree).
The game tree provides an inventory of all possible game sequences. The root corresponds to the initial game position. Each daughter node corresponds to a position arising from a distinct move on the next turn. The top two levels of our game tree are given in figure 3:
Fig 3: First two levels of the Game Tree
Daughter nodes are computed until they correspond to a won position. (The middle daughter is such a case.) Returning to the tree analogy, such daughterless nodes are called Leaves. As a practical matter, we also designate any position from which one side is guaranteed victory as a leaf. An example is shown in figure 4:
Fig 4: An Honorary O-Leaf
Even though two spaces are still open, Naughty is certain of a win, as Crosser has no hope of getting three Xs in a row.
We will show that the game tree geometry contains "map" and "unit placement" information, as well as encoding the cut-off ratio and each player's optimal bidding strategy at each node.
First, we would like to show that a cut-off ratio really exists: It should be clear that a player with an overwhelming financial advantage can guarantee a successful outcome. If, for instance, there were M open places remaining, and one player were richer by a factor in excess of M, then that richer player could force his will by bidding at each turn slightly in excess of his opponent's account balance. But how do we know there isn't some mid range of relative financial strengths for which neither player would be able to force the outcome?
Suppose that B is the lowest account balance ratio that lets Crosser force a win. Then, for any account balance ratio less than B, there is some sequence of bids that Naughty could use to deny Crosser victory. In other words, once the account balance ratio falls below the threshold, Naughty is in a position to force the win himself. This guarantees the existence of the cut-off ratio (henceforth known as the COR). Thus, a player's ability to win is determined by his relative financial strength compared with the COR value. Further, the set of COR values (one for each node) determines the optimal bidding strategy, since the CORs tell a player how much he can afford to bid to drive the game in an attractive direction without running afoul of the COR criterion at the resulting daughter node.
We wish to develop a procedure for computing the CORs. Without loss of generality, we can simplify the computation by limiting attention to just a portion of the game tree. Any (non-final) game position will have multiple continuations for its next move, each corresponding to a distinct daughter node. We arrange these daughter nodes in order of increasing CORs. The node with the largest COR corresponds to the game continuation for which Naughty will have the hardest time winning (i.e. he must have the greatest financial advantage in order to win). Clearly, Crosser will bid on this outcome and Naughty will bid on the daughter node with the lowest COR.
In addition to the two nodes mentioned, there may be other daughter nodes with intermediate COR values. Such nodes correspond to situations of intermediate attractiveness for both players. Clearly, neither player should place his top bid on such an intermediate situation. But perhaps smaller bids from both players could combine to select an intermediate COR node? In fact, we can neglect this possibility. Each player would only bid on an intermediate node if the resulting outcome (position plus financial strength) is at least as attractive for him as the outcome of the respective extreme COR case. The suggestion that Crosser and Naughty could team up to produce a situation that would be superior for each of them is untenable. (Two person games are zero sum, meaning they admit no sensible possibility for cooperation). While a joint bid could conceivably lead to an outcome without loss of attractiveness to each player, these cases can safely be ignored, as they have no bearing on determining the game winner.
Consequently, we restrict the game tree to include just maximum and minimum COR daughters of each node, limiting study to the pruned binary game tree in figure 5:
Fig 5: Pruned Game Tree
Some conventions are introduced to simplify discussion of the tree. Left daughters always represent Crosser's favoured continuation, right daughters the continuation most attractive to Naughty. Each node is numbered distinctly, with the daughter of node N being labelled 2N (left) and 2N + 1 (right). Node 1 is assigned to the game starting position (root).
Nodes 2, 6, 15, 28 and 29 are all leaves. Green leaves correspond to wins for Crosser, while yellow leaves represent Naughty victories.
How can we compute the COR values? First we derive a relationship between the CORs of a node and its two daughters. Then, we make use of the special situation at the leaves, whose COR values are self-evident, as our starting point for climbing the tree.
Suppose we have arrived at some node N, and the account balances are Naughty: L and Crosser: L * bN, where bN is the COR value for node N. Now Crosser would like to move forward to node 2N and is willing to bid up to an amount Dx, chosen so that the ratio of Crosser's to Naughty's account balances at node 2N [in other words, (L * bN - Dx) divided by L] would equal b2N. Naughty, on the other hand, would like to drive the game toward node 2N+1, and is likewise ready to bid up to an amount Do, chosen so that the ratio of Crosser's to Naughty's account balances at node 2N+1 [in other words, L * bN divided by (L - Do)] would equal b2N+1. Since we are examining a cut-off condition, the two bids (the D values) must be equal. Combining these results gives the formula relating bN to the COR values at the daughter nodes:
bN = b2N+1 * (1 + b2N) / (1 + b2N+1)
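The recurrence follows directly from the bid definitions above. In sketch (writing D_X and D_O for the two maximal bids):

```latex
D_X = L\,(b_N - b_{2N}), \qquad
D_O = L\left(1 - \frac{b_N}{b_{2N+1}}\right).
% At the cut-off the two bids coincide, D_X = D_O, hence:
b_N - b_{2N} = 1 - \frac{b_N}{b_{2N+1}}
\quad\Longrightarrow\quad
b_N = b_{2N+1}\,\frac{1 + b_{2N}}{1 + b_{2N+1}}.
```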
Now, what are the COR values at the lowest rung (the leaves)? Since Crosser would be ready to run down his account balance to zero in order to reach a left (green) leaf, we can assign a COR value of 0 to these nodes. Likewise, Naughty's willingness to empty his account to reach a yellow node means that we should assign these nodes a COR value of infinity. Figure 6 shows the pruned game tree with COR values and each player's optimal bidding level, expressed as a percentage of his remaining account balance.
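The climb from the leaves to the root can be sketched in a few lines of Python. The tree shape follows the node numbering above (leaves at nodes 2, 6, 15, 28 and 29); which leaf carries which colour is an assumption on our part, chosen to be consistent with the Tortoise's answer of two and a half:

```python
INF = float('inf')

# Leaf CORs for the pruned tree of figure 5: green (Crosser-win) leaves get 0,
# yellow (Naughty-win) leaves get infinity. The colour assignment below
# (left-hand leaves green, right-hand leaves yellow) is assumed, not stated
# explicitly in the figure captions.
leaf_cor = {2: 0.0, 6: 0.0, 28: 0.0, 15: INF, 29: INF}

def cor(n):
    """Cut-off ratio of node n, climbing the tree from the leaves."""
    if n in leaf_cor:
        return leaf_cor[n]
    left, right = cor(2 * n), cor(2 * n + 1)
    # bN = b2N+1 * (1 + b2N) / (1 + b2N+1), rearranged as
    # (1 + b2N) / (1 + 1/b2N+1) so that an infinite right-daughter COR
    # is handled naturally (1/inf == 0.0 in floating point).
    return (1 + left) / (1 + 1 / right)

print(round(cor(1), 6))  # 0.4
```

Reassuringly, the root COR of 2/5 (in the Crosser-to-Naughty convention of the recurrence) reproduces the Tortoise's "two and a half": Naughty forces a win exactly when his balance exceeds 1/0.4 = 2.5 times Crosser's.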
Fig 6: Cut-off Ratios and Optimal Bidding Levels
Two rules determine the optimal bids for each player:
Generally a player will not know his opponent's account balance, and so he will be unaware of his financial position with respect to the COR values. That is not an obstacle to our analysis, as the optimal bidding procedure does not require any knowledge of the opponent's account balance. By allocating capital at each turn in accordance with the C and N prescriptions in the game tree, a player is guaranteed to win, insofar as a win is achievable.
The working of the optimal bidding prescription is easily grasped: imagine that the initial funding ratio corresponded exactly to the COR. Following the bidding prescription would then hold the balance at each turn against optimal bidding by the opponent. But what if a player were funded in excess of the COR? In that case, following the bidding prescription maintains a positive increment over the COR at each daughter node. This advantage allows the player with superior financing (relative to the COR) to navigate to a winning leaf.
Before applying results to Payola, it is useful to recapitulate what we've learnt. One of the first things to strike the eye is the very different feel of a game record based on game tree geometry and a more conventional record, for instance, a 3x3 matrix annotated with the order of moves. Whereas the conventional presentation emphasises a time axis, the annotated game tree presents an all-at-once (non-sequential) encapsulation of the entire range of possible play developments.
A quick scan of the annotated game tree reveals the most critical moves. These are the nodes with high C and/or N values. In contrast, if the outcome of a turn were immaterial, then the two daughter nodes would have identical CORs, and the mother node would be annotated with N=C=0. Otherwise stated: at nodes where left and right daughter COR values are similar, there is little difference between outcomes and the optimal bidding will be low.
The Tic-Tac-Toyola analysis provides a procedure for preparing optimal bids without the need to estimate an opponent's account balance or to anticipate an opponent's bids. Rather than engaging in a battle of wits to marginally outbid an opponent, a player treats his bid as a judiciously chosen threshold, and is equanimous as to whether the bid is accepted. An accepted bid advances his position, while a rejected bid promises an equivalent improvement in his relative financial strength.
Achilles: Well that's all very interesting, Mr. T, but I'm afraid I don't understand why you call it Tic-Tac-Toyola. Wasn't it supposed to show something about Payola? Or perhaps there is some connection with small Japanese automobiles?
Tortoise: Not at all, my dear Achilles. The article aims at helping Payolaphiles to put their bidding on a scientific footing.
Achilles: But how? You've leaned heavily on Tic-Tac-Toyola being a two-person game. A zero sum game, you called it. Or maybe you've invented a new 2-player Payola variant?
Tortoise: No, that's not right either. A two-player variant would hardly be Payola. The author created Tic-Tac-Toyola (I assume you know we are mere characters created as conduits for the author's erudite speculation) because he thought it had something to teach about Payola.
Achilles: Characters in a story? Heavens, Mr. Tortoise, don't unnerve me! I'm still trying to come to terms with life on this desert island. Besides, I noticed you didn't deal with my objection. You wouldn't be trying to confuse me, would you?
Tortoise: Trying to confuse you? Would a Payola player stoop so low? But as far as your objections go...
You are right that in looking at Tic-Tac-Toyola we relied heavily on its zero-sum character. Multiparty games (like Payola and Diplomacy) are very different. For one thing: speed matters. It is no good having an optimal algorithm that could guarantee victory after 10 game years if another player, by taking calculated risks, gets there in 6.
And then, multiparty games, like Payola, are social endeavours with a very strong non-zero-sum character between pairs of players. It's fair to say that success comes largely from a player's "teamability" as well as his ability to implement his agenda within his team. So while judicious capital allocation is certainly important, it is most definitely not a magic key to victory. Every experienced player knows this.
Finally, even in a Payola game between just two players (for instance, a variant where units of "third" powers act according to bids from the two active players; or perhaps a "normal" game where two players, each with 17 centres, face off - how exciting!), there is an important practical problem with modelling the full range of play in a game tree: the number of nodes simply grows too quickly to manage.
Achilles (thinking this through, frowning): ... That's well said, Mr. Tortoise. But then I really don't understand the point of Tic-Tac-Toyology.
Tortoise: We can use Tic-Tac-Toyology, as you aptly call it, because most tactical encounters in Payola are localised one-on-one operations, which can be practically and effectively treated as time-limited engagements. Once we separate out strategic questions, such as whom to fight, what worth to assign to particular SCs, and how much to value one's own savings compared to the damage inflicted on an opponent's account, insights from Tic-Tac-Toyola can help to guide our bidding.
Furthermore, even though Tic-Tac-Toyology can't be enlisted as the driver for strategic decisions, we will see that it can play the role of a valuable strategic consultant. We'll see that Tic-Tac-Toyola concepts are exportable into Payola: the game tree, N- and C-values, and the cut-off ratio each remain meaningful concepts that can be harnessed to improve Payola play.
At this point, I think some examples may be helpful:
We begin with a very simple situation. Consider the case of Italy trying to maintain ownership of Tunisia against a French fleet located in North Africa before the Spring turn. We assume there aren't any other units near enough to influence play. We set our time horizon at one year and assume that keeping Tunis is worth 20 AgP to Italy, and that Italy reckons French spending as important as his own. Now we ask: how much should Italy bid to control the fleet in Spring and Fall?
We see that Italy will retain Tunisia if he can control the French fleet either Spring or Fall. (In the Spring he will seek to order the fleet out of reach). It follows that our operations tree has three leaves:
We've simplified by assuming that France's cost for outbidding Italy is the amount of the Italian bid. This is only strictly true when the game is played according to the Ebayola rule. With normal Payola bidding, the positive terms in the cost functions represent minimum possible values. (We'll produce a more general treatment in a later example.)
How does Italy go about choosing the A values (his Spring and Fall bids)? When we examined Tic-Tac-Toyola, we saw that the players' bidding levels were carefully chosen so that each player was indifferent as to which branch of the game tree was followed. (Bids were adjusted to remove any branch preferences.) The same applies here: the cost functions for all three cases should be identical. This means Italy should bid 5 AgP in the Spring and, if that bid is not successful, follow up with 10 AgP in the Fall. For this case, where Italy seeks to control a unit either in Spring or Fall, he should plan for his Fall spending to be double his Spring bid.
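The arithmetic can be checked in a few lines of Python (a sketch under the stated assumptions: Ebayola costing, a spending satisfaction ratio of one, and Tunis worth 20 AgP):

```python
# Italy's three cost functions (lower is better), with S the value of Tunis,
# A1 the Spring bid and A2 the Fall bid. Under the Ebayola rule France's
# cost for outbidding Italy equals the Italian bid, credited here at ratio 1.
S = 20  # value of keeping Tunis, from the text

# win the Spring bid:        A1 - S
# lose Spring, win Fall:    -A1 + A2 - S
# lose both:                -A1 - A2
# Equating the first pair gives A2 = 2 * A1;
# equating the second pair gives 2 * A2 = S.
A2 = S / 2   # Fall bid
A1 = A2 / 2  # Spring bid
print(A1, A2)  # 5.0 10.0
```

The three cost functions all evaluate to -15 at these bids, confirming the branch indifference the text calls for.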
This prescription works for the case where a unit has to be controlled either during Spring or Fall. What about cases where a unit has to be controlled in both Spring and Fall? This is, in fact, the situation encountered by France in the above example. Consequently, the same operations tree applies. The 2:1 Fall to Spring optimal bidding ratio applies as well, so that Tun ownership goes to the player assigning the higher valuation to this centre. This is easily verified by repeating the above calculation, where the cost functions for the three cases are: -B1 - B2 + S (control in Spring and Fall, and so take Tunis); -B1 + B2 (control in Spring but not in Fall); and B1 (failure to control in Spring).
Of course, most tactical situations are more complicated than the case above. We'd like to demonstrate the solution process for a multiple-unit scenario. Consider the situation in figure 7. It is pre-Spring, with a French fleet in Pie. Rom and Mar are both owned by Italy. This time Italy is not able to displace the French fleet out of range in the Spring. Yet Italy still has a preference for the Springtime outcome: Italy would rather see France enter the Fall season with the fleet in Tus than in Mar, as the Italian army provides an extra defensive option for protecting Rom in the Fall. How much should Italy allocate to draw Pie to Tus in the Spring, and how much for Fall-time defence?
Fig 7: Italy prefers to draw the French fleet to Tuscany in the Spring, as that creates an extra opportunity for defence
There are four cases to consider. Seen from Italy's perspective:
Here we have introduced two new parameters: the spending satisfaction ratio: aIF and the overbid factor: bF.
A player benefits from each AgP by which an enemy's account is reduced. Yet a reduction in an enemy's account is not generally equivalent to the same amount added to one's own account. The spending satisfaction ratio models this distinction; aIF is defined as the number of AgPs Italy can spend for each AgP spent by France such that Italy considers himself to have broken even (apart from any net exchange of supply centres). Note that aXY is not derived from the tactical position. Rather, it is determined by strategic considerations.
The overbid factor accounts for the fact that the cost for outbidding an opponent is generally some factor greater than unity above the opponent's bid. By indexing bF, we account for variations in players' ability to read their opponents.
In the next two cases, the French fleet is ordered to Tus in the Spring: Italy can frustrate France's attempt at capturing Rom by controlling either the French fleet or the Italian army, and can distribute his Fall spending allocation for this operation arbitrarily between the two units. Here we consider two sub-cases: either France bids twice as much as Italy in the Fall to be certain of gaining control of both units, or France spends nothing in the Fall and consequently fails to capture Rom.
Of course, these sub-cases are only the two most extreme among a continuum of possibilities. It is also possible that France might fail to control one of the army/fleet pair and still bear the cost of a successful bid on the remaining unit. Or France might gain control of both units at a price only slightly higher than Italy's bid. However, to make a fair comparison, we must compare like with like: the end-of-year outcomes for cases 1 and 2 are completely determinate results. It is therefore appropriate to focus on these sub-cases.
Generalising slightly, if there were N units that could influence the outcome in Rom (the French fleet and N-1 other units), all of which must be controlled for France to make Rom, then France would be required to outspend Italy by a factor of N in order to capture the centre. Here N is assumed to be two or more.
Equating the four cost functions (Italy's perspective):
-aIFbFA1 - aIFbFA2 + SI   (France controls the fleet in Spring and in Fall; Rom falls)
-aIFbFA1 + A2             (France controls in Spring, Italy in Fall; Rom holds)
A1 - N aIFbFB2 + SI       (Italy controls in Spring, France outbids in Fall; Rom falls)
A1 + B2                   (Italy controls in Spring and in Fall; Rom holds)
If we take aIFbF = 1 and assume Italy values his SCs at 24 AgP (reasonable values for many game situations), we would expect Italian bids of 2:F Pie - Tus (Spring) and, depending on the Spring outcome, either 12:F Mar - Lyo, or 8 AgP arbitrarily distributed between A Nap - Rom and F Tus H (Fall).
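A short sketch makes the dependence on the parameters explicit. The closed forms below follow from equating the four cost functions pairwise, writing k for the product aIF * bF (an abbreviation of ours, not the text's):

```python
def italian_bids(S, k=1.0, N=2):
    """Italy's optimal bids from equating the four cost functions.

    S: Italy's valuation of the centre (in AgP); k = aIF * bF;
    N: number of units France must control in the Fall (fleet plus N-1 others).
    """
    A2 = S / (1 + k)          # Fall bid if the Spring bid was lost
    B2 = S / (1 + k * N)      # Fall allocation if the Spring bid was won
    A1 = (A2 - B2) / (1 + k)  # Spring bid for Pie - Tus
    return A1, A2, B2

print(italian_bids(24))  # (2.0, 12.0, 8.0)
```

With S = 24, k = 1 and N = 2 this reproduces the 2, 12 and 8 AgP bids quoted above; all four cost functions then evaluate to the same value, as the indifference argument requires.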
We notice that for the purpose of optimal bid calculations, tactical situations are categorisable in terms of a tally of "and"s and "or"s applying to the number of units that need to be controlled. So, for instance, Italy, to defend his holdings, has to control the French fleet in the Fall, or control the French fleet in the Spring and control either the French fleet or the Italian army in the Fall. By interchanging all occurrences of "or"s and "and"s in the unit tally, we construct a description of the opponent's (France's) position. Both players are confronted with the same four cases, and France's cost functions correspond closely with their Italian equivalents:
X1+ X2 - SF
-aFIbIX1 + Y2 - SF
-aFIbIX1 - aFIbIY2/N
Here, French bids of X1, X2 and Y2 compete against the Italian bids of A1, A2 and B2. We can calculate the French bidding levels as:
Apparently, French bid levels closely resemble their Italian counterparts, with 1/N substituting for N. This substitution, following the exchange of "and" and "or" in the unit tally, is very evocative of the analogous transformation between serial and parallel electrical networks. It strongly hints at a general procedure for computing optimal bid levels for arbitrarily complicated tactical encounters. (Care to try? A 10 AgP reward is hereby offered to the first person to send the correct solution to the Letters to the Editor.)
If France chose the same S and ab values as Italy, he'd bid 2:F Pie - Mar in the Spring and (depending on the Springtime outcome) either 12:F Mar H or 8:F Tus - Rom and 8:A Nap - Rom in the Fall. We see that, with ab set by both players to unity, the centre goes to whichever of the two players sets the higher supply centre valuation.
We note that the presence of the defending Italian army makes it 40% more expensive for France to capture a supply centre than for Italy to defend one: cF = 7/12 SF and cI = 5/12 SI give the minimum differential spending for attack and defence, respectively. Thus, if France succeeds in capturing the centre, we can assume he has outspent Italy by at least 7/12 SF. In comparison, if Italy succeeds in defending his SC, he will have outspent France by at least 5/12 SI (always assuming one's opponent is bidding optimally).
With these formulas in hand, we can examine how the results of a typical bidding calculation can be interpreted and applied. We begin by exploring the relevance of the cost function parameters a and b. (For convenience, we will sometimes suppress the indices.)
Remember that a12 was defined as the maximum number of AgPs player 1 can spend for each AgP spent by player 2, so that player 1 still feels satisfied with the outcome of a tactical engagement. This spending satisfaction ratio encodes country X's policy toward country Y:
In contrast to a, the overbid factor b is not an independent variable. b is 1 more than a player's average overage (the excess cost of a successful bid above the highest hostile bid), and is included to account for the fact that outbidding an opponent may incur costs significantly above the opponent's own bid level. Players who are very good at reading their opponents may have a b only slightly greater than 1, whereas less careful and more eager bidders may have an overbid factor as high as 2, or even higher.
As a mediator of spending readiness, ab influences the outcome of tactical engagements: a lower level of contentiousness (i.e. a lower ab for both players) favours the player with the less pronounced spending requirement. Quantitatively, the minimum ratio SF*/SI* required for France to capture a supply centre in the previous example increases from 1 to N (= 2) as ab drops from one to zero. (Here we have fixed ab to be equal for the two players.)
Next, we’d like to reprise the notion of the cut-off ratio developed in Tic-Tac-Toyola. The COR was defined as a goal achievement threshold, so it is tempting to identify it with the ratio SF*/SI*. However, in addition to representing a threshold, Tic-Tac-Toyola's COR measured the attractiveness/difficulty of a position. (The lower the COR, the less exacting the financial requirements for victory). In Payola, the cost function is the natural measure of the difficulty of a position. The cost function is a key concept and bears detailed examination.
We have already cited the relationship between the cost function and optimal bidding. But the cost function also provides important input to strategic decisions. Most transparently: aggregating relevant changes in cost function values over multiple theatres provides one useful estimate of the incentive for changing (or maintaining) the diplomatic posture toward another player.
Cost function comparisons are also useful as a guide to resource allocation between (i.e. not just within) theatres. Theatre-to-theatre comparisons are furthermore useful input for AgP-free operations: for instance, deciding whether to enter the Russian theatre via a retreat Gal-Ukr, or to engage Germany by retreating Gal-Sil. (Similar remarks apply for builds and disbands.)
In principle, the cost function's usefulness as a common measure for comparing different theatres could equally be applied to extensions of Payola: for instance, to a variant in which a player managed a single current account used to control powers on multiple boards.
Finally, in analogy to the N- and C-values in the Tic-Tac-Toyola game tree, we'd like to say something about the allocation of capital between the Spring and Fall seasons. In the first (simple) case, we saw that each player does well to bid one third of the total funds he has allocated to a theatre in the Spring. This leaves him ready to 'double up' in the Fall.
What is the Fall to Spring spending ratio in the second (more complicated) scenario, and how is it affected by the appearance of ab? Typical values for ab between countries at war range from about 0.8 to about 1.3. Examining the bid levels in our sample problems shows that, within this range, ab has only a very minor effect on the allocation of capital between Spring and Fall. For the second, more complicated, scenario we can calculate a fairly generally reliable result by substituting the canonical value of 1 for ab. Then both A2/A1 and X2/X1 are given by the expression 2(N + 1) / (N - 1).
The Fall to Spring spending ratio starts at infinity for N = 1 (i.e. no Spring bidding if there is no unit to compete with the French fleet), decreasing then in the sequence 6, 4, 10/3, ... with the addition of fixed defenders. In all such cases, the Fall to Spring spending ratio exceeds 2 (the value computed for the simple case).
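With ab = 1, the closed forms of the second scenario give a Fall to Spring ratio of 2(N + 1) / (N - 1); this formula is our reconstruction from the quoted values rather than an expression taken from the original, and a quick check reproduces the sequence:

```python
from fractions import Fraction

def fall_to_spring(N):
    """Fall-to-Spring spending ratio A2/A1 (= X2/X1) for ab = 1,
    reconstructed as 2(N + 1) / (N - 1); undefined (infinite) at N = 1."""
    return Fraction(2 * (N + 1), N - 1)

print(fall_to_spring(2), fall_to_spring(3), fall_to_spring(4))  # 6 4 10/3
```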
Does the Tic-Tac-Toyology method really work for Payola and, if so, why hasn't it made an appearance in the analyses of other games? The game tree concept is not new; it is a typical starting point for game analysis. However, for all but the simplest games, the game tree grows too rapidly to permit forward calculation to the leaves. It is therefore not viable to use straightforward backward induction to evaluate earlier game situations. Instead, one typically relies on heuristics, that is, evaluation of a game situation based on a set of positional and material characteristics. Heuristics are a collection of ad-hoc, experiential principles. They are by nature untestable except by performance comparison with other heuristics.
The Payola game tree grows very quickly. Yet we aspire to exact calculation, at least within the confines of the selected time horizon. Is it reasonable to entertain this hope and, if so, what special characteristics of Payola make the Tic-Tac-Toyology analogy viable? To answer these questions, I’ll briefly compare Payola with two other popular games: chess and go.
More effort has gone into automating chess than into any other game, and impressive results have been attained. Today's machines play on a par with or better than the best human players. This has been accomplished through the use of powerful hardware as well as sophisticated (but still ad-hoc) heuristic routines mimicking human judgement. What is the potential for applying Tic-Tac-Toyology concepts to chess?
As it turns out, Payola presents a much more favourable domain than chess for Tic-Tac-Toyology modelling: the action radius of Payola's pieces is much more limited in both space and time. (The only theoretical exception, namely the long convoy, is for practical reasons quite rare in Payola.) It is therefore easy to divide the board into isolated theatres, each with a clear unit participation catalogue. In effect, we replace the problem of modelling play on a circa 70-space map with modelling play on about 7 maps, each with circa 10 spaces (a much easier problem), and then we may neglect some of the sub-maps.
Chess modelling is also complicated by indeterminacy in the time horizon: it is normal in chess for a series of forced or semi-forced moves to collapse a portion of the tree. In contrast, the very high level of unpredictability for all participants in each Payola turn strongly depreciates the value of look-ahead, to the point where a 2- or 3-ply assessment is very close to perfect.
What about the game of go? Go superficially resembles Payola in that the direct action radius of each unit is 1 square. In fact, it might be realistic to apply Tic-Tac-Toyola principles to the early phase of a go game, or, better still, to goyola (players bid to control stone placement). We compute the cost functions in the various active theatres, then select the most auspicious move. This might seem an all the more promising approach, as go(yola) is pure tactics (according to our understanding of the word, in which strategy refers exclusively to multi-player aspects.)
Why is it, then, that no one has ever developed a program that could play a passable game of go? The critical obstacle to a computational solution for go is the potentially explosive growth of each unit's effective action radius during the course of play. Whereas Payola remains a game of shifting, but nevertheless clearly distinct (i.e. no more than weakly coupled) tactical theatres, go theatres will merge, to the extent that a single endgame move can have a momentous impact on extremely remote localities. What's more, the range of a given move can be difficult to characterise. These features frustrate any attempt to decompose the game tree into a collection of computationally less demanding game saplings.
A long article merits a short conclusion. Keeping with a commendable tradition, I’ll frame it around a recapitulation, and begin with an important caveat. Lest anyone get the wrong idea: Tic-Tac-Toyology, standing on its own, is not a candidate for automating play. To mention just one limitation: Tic-Tac-Toyology has no facility for analysing recent player behaviour and weighing the results with other influences to decide on a diplomatic posture (i.e. the spending satisfaction ratios). This would be a task for a separate strategy module.
Tic-Tac-Toyology could be realised as an effective tactics module for managing local encounters. Better said, Tic-Tac-Toyology could provide a tactics module plus, serving as a valued consultant to a strategy module on specific questions, for instance, with cost assessments for various diplomatic postures. Further, a Tic-Tac-Toyology module could also play a major role in deciding questions halfway between strategy and tactics. For instance: which of an enemy's SCs to target, and how hard to fight to defend each of one's own SCs. On a more modest level, even short of an implementation, Tic-Tac-Toyola concepts have potential for improving play.