Targeting is a crucial feature for any feature management service. It’s used to specify which users see which flag variation. In this RFC, we will explore the data models and algorithms required to build an efficient targeting service. We will model a system that is minimal at its core but can be used to compose complex rulesets that can be evaluated efficiently (O(n), where n is the number of targeting rules).
Data Models
A targeting object represents a set of rules that are evaluated to produce the best-fit variation for a particular flag. Rulesets are evaluated in priority order: the first rule to be satisfied dictates the resulting variation.
Notes:
Targeting rules are only applied if targeting is enabled; otherwise, the flag evaluates to the fallthrough variation (with no weighting applied).
You can’t create a single rule to target multiple identities (users) or segments. However, you can create a new rule for every identity/segment you want to target.
Each targeting rule can be weighted. The fallthrough variation can also be weighted.
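To make the notes above concrete, here is one possible shape for the targeting object. This is an illustrative sketch only; the field names and types are assumptions, not the actual schema.

```typescript
// Hypothetical data model sketch; field names are illustrative, not a spec.
type TargetingRule = {
  condition: string;    // expression matched against the evaluation context
  identity?: string;    // target a single identity (user), or…
  segment?: string;     // …a segment; a rule targets exactly one of the two
  variation: string;    // key of the variation served when the rule matches
  weight?: number;      // optional rollout weighting for this rule
};

type Targeting = {
  enabled: boolean;              // when false, serve the fallthrough variation
  rules: TargetingRule[];        // evaluated in priority order (array order)
  fallthroughVariation: string;  // served when no rule matches
  fallthroughWeight?: number;    // the fallthrough can also be weighted
};

const example: Targeting = {
  enabled: true,
  rules: [
    { condition: "equals", segment: "beta-testers", variation: "on" },
    { condition: "equals", identity: "user-42", variation: "off" },
  ],
  fallthroughVariation: "off",
};
```

Note that targeting multiple identities or segments requires one rule each, matching the constraint above.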
Evaluation Engine
The ruleset evaluation engine selects the variation of the first rule that is satisfied. It can be defined recursively in a few lines. Note that the following is pseudo-code intended only to illustrate the core logic used for flag evaluation.
```
evaluate(ruleset, fallthrough_variation) :: (Ruleset, Variation) -> Variation
  if (ruleset is empty) return fallthrough_variation
  {condition, identity, segment, variation} = ruleset.pop()
  if (<condition> matches <identity | segment>) then return variation
  return evaluate(ruleset, fallthrough_variation)
```
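As a sanity check of the logic, the pseudo-code above can be sketched as runnable TypeScript. This is a minimal sketch under assumed types: condition matching is stubbed out as a predicate, since the focus here is first-match-wins ordering. An iterative loop is used instead of recursion, which behaves identically but avoids mutating the ruleset and growing the call stack.

```typescript
// Illustrative rule: `matches` stands in for the real condition-matching
// logic against an identity or segment; `variation` is the result served.
type Rule<V> = { matches: (ctx: unknown) => boolean; variation: V };

// Walk the ruleset in priority order; the first satisfied rule wins.
// If no rule matches (or the ruleset is empty), serve the fallthrough.
function evaluate<V>(ruleset: Rule<V>[], fallthrough: V, ctx: unknown): V {
  for (const rule of ruleset) {
    if (rule.matches(ctx)) return rule.variation;
  }
  return fallthrough;
}

const rules: Rule<string>[] = [
  { matches: (c: any) => c.id === "user-42", variation: "on" },
  { matches: (c: any) => c.segment === "beta", variation: "off" },
];
```

For example, `evaluate(rules, "fallthrough", { id: "user-42" })` returns `"on"`, and an empty ruleset returns the fallthrough variation.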
We’ll explore the serialisation and deserialisation of targeting rulesets in greater depth in another RFC. To efficiently transport rulesets to our server-side SDKs and evaluate them locally, the transport mechanism must ensure the ruleset order is maintained during serialisation.
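One straightforward way to preserve ordering, assuming a JSON transport (an assumption here, not a decision from this RFC), is to serialise the ruleset as a JSON array: array element order is guaranteed by the format, whereas object key order should not be relied upon.

```typescript
// Sketch: a JSON array round-trip preserves rule priority order.
// Field names are illustrative.
const ruleset = [
  { segment: "beta-testers", variation: "on" },
  { identity: "user-42", variation: "off" },
];

const wire = JSON.stringify(ruleset); // transport to the SDK
const decoded = JSON.parse(wire);     // order survives the round trip
```

A local evaluator on the SDK side can then walk `decoded` front to back, matching the first-rule-wins semantics described above.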