Provide Feedback on Messengers Research

Do you have comments or feedback on any of the research materials we’ve developed on messaging apps for development? Let us know! We value your feedback!

Hi, where can I find a description of the research and methods behind these case studies and the synthesis report?
I note a reference to “more than 50 interviews with development practitioners, digital development experts, technology providers and entrepreneurs” on pg 3, but haven’t found any more info.
Thanks.

Good question @cosgrovedent! Let’s ping @Boris_Maguire for more details.

Boris?

Hi @cosgrovedent! Thanks for your interest. You’re right: unless cited otherwise, the case studies are predominantly based on interviews with 53 people (all of whom are listed in the acknowledgements). More on how we approached those:

Following planning calls with DIAL, Echo started our research with a review of existing reporting (e.g. your ICRC report, ICT Works, GSMA, Chatbot Magazine, etc.), and drew on DIAL’s and Echo’s networks to identify broad sector experts. Based on that research and what we learned about those experts, Echo developed interview templates with DIAL’s input. Echo then reached out to the experts, ultimately interviewing about 12 people, each for 1-2 hours. Echo used this preliminary research and the expert interviews to achieve the following:

  1. Get a 30,000-foot view of the topic
  2. Identify the spectrum of perspectives on the utility/application/promise of messengers for development
  3. Identify and connect with interesting organizations/projects

Once those 3 goals were achieved, Echo developed a more detailed interview template and connected with implementing organizations and social enterprises that were using or had used messengers. Preliminary interviews focused on the following:

  1. Organizational summary (type, size, revenue/biz model, etc.)
  2. Project/Program summary (goals, budget, location, sector, structure)
  3. Platform/App (apps used, decision-making behind app use, role/goal for the app)
  4. Assessment (overall project outcomes/results, specific impact of the messaging app, etc.). App assessment focused specifically on:
     * UI/UX
     * Privacy
     * Cost
     * Technical support
     * Flexibility (APIs, integrations, features, etc.)
  5. Lessons learned (lessons for the organization’s sector and contexts, and those generalizable to others)
  6. Next steps (recommendations for additional resources, other projects to look at, recommendations to make the reporting most useful)

In total, Echo interviewed about 40 organizational representatives and project personnel, and from those interviews we narrowed our focus to about 15 cases. For each, we drafted internal project “one-pagers” highlighting key facts and lessons for DIAL.

DIAL reviewed them and submitted a list of follow-up questions, which formed the basis of Echo’s follow-up interviews with personnel from the 15 selected projects. In some cases these were conducted with the same personnel interviewed previously; in other cases we branched out and interviewed additional project stakeholders, such as those from other teams or partners on the project.

These second-round interviews led to a further narrowing of the focus to 14 projects that would be reviewed in the Project Catalog, with 6 selected for deeper-dive case studies. The one-pagers were subsequently revised internally by Echo and DIAL into project summaries, then shared with the relevant external project personnel we’d interviewed to get their input on accuracy and their responses to our analysis.

Meanwhile, Echo drafted the case studies, a process that generated substantial lists of new clarifying questions for each project, both from DIAL and Echo. These resulted in a third round of interviews with personnel from the 6 cases. In some cases a fourth round was also conducted, and in all cases there was substantial and continuous email correspondence to clarify details and obtain internal project documentation.

This ongoing correspondence led to second and third drafts of each case study, the final versions of which were shared with project personnel to check accuracy and, in some cases, to ensure that confidentiality was maintained where necessary.

When the case studies and Project Catalog were complete, Echo and DIAL reviewed them and the underlying research, then engaged in an in-depth discussion of key learnings and use cases we’d observed across the different cases and summaries. This led Echo to draft a final report outline, which provided an initial breakdown of common use cases and lessons learned.

We then drafted the report over the course of a few weeks and shared multiple early drafts with DIAL for input. The report structure and content were revamped multiple times over the course of about two months, including our framing of the use cases and lessons. A final draft was then shared internally with other DIAL experts and copy editors before being submitted for final design and publishing.


I hope that helps, but if you have any other questions please let me know! Thanks again for your interest in the research.

Cheers,
Boris

Thanks Boris for the thorough description.

Just out of curiosity, was there a reason you didn’t include this information in the actual report? Is it because you don’t expect it to be of broad interest?

For what it’s worth, I think there’s substantive value in including this type of information, to the extent that it really does help readers evaluate and consider what sense to make of the findings. For example, the description above is super helpful, but I’d also love to better understand how the case studies were selected from the long list. Understanding the questions and criteria used for this would help with assessing the generalizability of those cases, and with understanding the larger universe of projects from which they distinguished themselves in one way or another. If the questions and criteria were only loosely defined or based more on gut instinct or what you thought would make a good case study, that’s fine too, but spelling it out explicitly makes the utility of the cases clear.

Publishing detailed notes on methods in these kinds of publications is also important for field-building. There are so many organizations with limited research expertise conducting research that close descriptions can be helpful for modelling, learning, and improving practice generally. This matters because there’s a lot of unstructured research out there that could be more rigorous and more useful. When a project like yours has the resources and expertise it does, I think that sharing can make a difference.

In that spirit, I’ll close by noting how awesome a methods blog post on the research process would be. I think there are several people who would appreciate reading about your experiences developing and applying this procedure (where you got the idea for the research design, what methodological resources you had and wished you had, and what you might have done differently).

Anyway, not to request lots of extra work, but you seem well positioned to provide a valuable public service, so thought I’d nudge. Thanks for all the hard work so far.

c

@Maurice, @cosgrovedent makes a good point. Do you think you could add the information @Boris_Maguire provided here as an appendix (e.g. “Methodology Details”) to the report and the website?

Yeah, definitely!