By the time Pat and Vanessa found a landlord willing to show them a basement suite, they had been on the streets for seven months. It took all of seven minutes for the landlord to reject them. Even if their credit score hadn’t been in the dumps, the fact that their rent money came from provincial and federal subsidies at two different times of the month made them a decidedly unattractive proposition. And so Pat and Vanessa found themselves back in the shelter system, moving night after night.
In policy jargon, Pat and Vanessa were experiencing a lack of co-terminosity. In regular speak, Pat and Vanessa were caught between two lumbering bureaucracies. There is a whole class of social policy problems (unintentionally) created by the organizations set up to solve them. These might be called coordination, compliance, or communication problems. For example, a new benefit may be on offer, but citizens do not know they are eligible.
For this class of problems, it helps to take the perspective of the user. By understanding how regular people interface with these systems, policymakers can design a more sensible service journey and better align the backend to support a coherent experience. Simple tweaks can make a big difference. Instead of asking citizens to opt in to a new benefit, for instance, they can ask citizens to opt out of it. Increasingly, these ‘user-centered design’ and ‘behavioral insights’ methods are making their way into the mainstream policy toolkit.
On a trip to Ottawa last week, we spent time with 4 of the 22 design labs that have sprung up within Canada’s federal government departments. We had organized the visit to exchange lessons learned between the social and public sectors. Five of us - representing the four non-profits and one provincial innovation fund InWithForward partners with - shared our experience trying to flip the order of the traditional policy process: starting with people to iterate new practice before creating the procurement and policy frameworks.
Changing business as usual seems to be high on the political agenda. Experimentation and innovation are interwoven into most ministerial mandate letters. Departments have been instructed to spend a percentage of their yearly budgets to “test new approaches to learn what works and what does not work using a rigorous method.” While the risk is that departments will repackage existing initiatives as experimentation, the opportunity is to incentivize and legitimize new forms of problem-solving and solution-making. We left pretty darn optimistic about the opportunity space.
Different problems require different tools
But if there is anything we’ve gleaned from three years of on-the-ground experimentation, it’s that there are at least two other classes of social policy problems. These are what we’ll call outcome and implementation problems. And these problems require a different approach.
Take Pat and Vanessa again. For the past decade, they’ve been on and off the streets multiple times. They’re stuck in a loop: with the help of housing workers, they negotiate for an apartment; over time, grievances with their landlord build up, and eviction seems to be the only option. Swamped housing workers put their case on the back burner as soon as housing is secured. Placed in new neighborhoods each time, Pat and Vanessa have no local connections. Not surprisingly, they gravitate back towards the social networks and the dealers that they know. In their old haunts, they feel less stigmatized and more at home.
Now, this isn’t simply a bureaucratic misalignment that can be fixed with revised protocols, systems, or forms. It’s a deeper disconnect between policy intent and program implementation: What constitutes a good outcome? What are the roles & responsibilities of a housing worker versus a landlord versus a tenant versus a neighbor? What’s the capacity of the housing organization and the surrounding community to collectively act?
Neighborhood ties aren’t an outcome government or housing organizations regularly track. The focus is on the placement rate and the wait list. With such demand for housing, workers feel under real pressure to move people off their lists. But, even if the outcomes and accountabilities were to shift, human resourcing would need to follow suit. Interacting with landlords and negotiating leases is quite a different skill set from brokering people into informal relationships and building neighborly connections.
What we’re learning is that these kinds of policy problems are unlikely to be resolved with a revised journey map or evidenced intervention. Indeed, it’s less about finding an elegant solution and more about investing in the cultures in which the solutions grow: the values, intents, routines, and accountabilities.
That’s certainly been our ah-ha in British Columbia with our partners Burnaby Association for Community Inclusion, posAbilities, and Simon Fraser Society for Community Living. We started by using user-centered design and behavioral insights to make and test a new service model with developmentally disabled adults called Kudoz. But, we’re finding that its success comes down to the distinctive culture which feeds the model. That’s a culture predicated on some different values and some different ways of surfacing assumptions, organizing and managing work, attracting and supporting talent, collecting and acting on data, and telling stories. And yet keeping a culture distinctive is difficult. Most cultures have semi-porous boundaries; they are shaped by the dominant institutions, by histories, by geographies.
The sustenance challenge
Innovation, then, isn’t just a front-end challenge. It’s not just about coming up with, testing, and evidencing interventions. It’s about developing and constantly refreshing the practices that bring about alternative solutions. Given this, our message to colleagues in Ottawa is to invest in the organizational capacities that lead to continuous experimentation - not just to invest in experimental projects.
Unlike the private sector, the social sector has no Research & Development infrastructure. Quite the opposite: most social organizations were designed to consistently deliver contracted services. Project monies don’t address this infrastructure gap. If anything, the pressure is to reduce administrative spends and direct maximum dollars to programs. Without a permanent R&D function, social organizations simply don’t have the resources to attract, manage, and retain the kind of talent required for sustained experimentation.
Over the next two years, we are building a shared R&D department across 7 to 10 social organizations. We want to test a ‘cluster’ approach to sustained experimentation. What if social organizations pooled a percentage of their budgets for a joint innovation team? This joint team would have the capacity to address outcome & implementation problems, whilst coaching staff within each organization to address coordination & compliance problems.
Whilst our vision is in its early stages, we’re eager to explore how service organizations, advocacy organizations, regulatory bodies, governments, and foundations might enable this kind of continuous innovation.
We wonder how, together, we might ...
Shift the relational basis
Granting and contracting processes between government and social service organizations tend to be predicated on mistrust. The fear of ‘wasting’ or ‘misdirecting’ public monies forges a compliance relationship - not a creative one. Organizations must account for their spends against preset targets and indicators. What if trust and mutuality were to become the basis of the relationship? How might grants & contracts position government and social service organizations as co-learners and co-creators?
Use small data
Experimentation requires meaningful data. And not just data that identifies a problem, but data that sheds light on the nature of the solutions. This is what we call small data: data thick with details and nuance, collected in context, and layered over time. Trouble is, most social organizations don’t have access to these sorts of insights. They might know service usage, delivery volumes, and if they’re lucky, user and staff satisfaction numbers. But, they’re unlikely to know who isn’t using or benefiting from their services, why, and what might be more engaging for whom. What if there were a common repository of small data and ways for organizations to commission data about missing population groups?
Invest in teams, not just ideas
When venture capitalists decide to invest in a start-up, they aren’t just looking at the strength of the idea. They are looking at the strength of the team to pull off the idea and pivot as needed. Alongside cash investments tend to come investments in senior leadership and board capacity. By contrast, social sector organizations and funders tend to put their resources into the new project idea, without putting equivalent resources into developing the underlying team. What if there were capacity grants alongside project grants, which allowed organizations to recruit and retain the necessary talent? What if Canada set up talent spotting & matching programs for the social sector like the Peace Corps (see AmeriCorps) or Teach For America?
Spread enabling conditions
There’s a deeply held assumption that innovation is a one-time investment. Invest upfront in a new intervention, and then scale that intervention to other communities. In our experience, it is deep participation in the design of the intervention that yields buy-in, engagement, and ultimately change. Rather than spread the intervention, then, it’s about spreading the enabling conditions. What if we identified the particularities and similarities of each community and host organization, and enrolled people and professionals in a deliberative readiness and capacity building process?
Measure learning routines
What you measure is what you get. Organizations very rationally align their operations to match what is counted. The push to shift from measuring outputs to outcomes comes from a well-intentioned desire to focus on what matters. Trouble is, measuring outcomes can have a similarly myopic effect. For instance, when organizations have to report on the number of people they’ve housed, quantity is still likely to override the quality of placements. What if there wasn’t just a focus on numeric results, but on the routines and practices that generate results? If organizations reported on their research questions, their use of data, and the number of shifts in their practice, how might that incentivize constant learning and sustained rigor?