Monday 26 October 2020

Blending online and offline tools for community engagement

This post was originally written for Commonplace and published on their blog on Tuesday 6th October 2020. You can view the full article here, and view two summary infographics below (when sharing the infographics, please cite the original source).

Suggested citation: 

Hafferty, C. (2020) 'Blending online and offline community engagement' [Guest blog] Commonplace. 6th October. Available at: https://www.commonplace.is/blog/blending-online-and-offline-community-engagement

 ----------------------------------------------------------------------------

Online and offline engagement both have their merits. Online engagement has become increasingly popular, particularly during lockdown, but what is the future of community engagement? PhD researcher Caitlin Hafferty shares her thoughts and recommends a flexible, adaptable, blended approach that combines the two as the best way to unlock the full potential of community engagement.

Overview

  • Community engagement can be complex. There are lots of online and offline tools available to enhance and facilitate participation.
  • Different tools have different merits and considerations, which vary depending on the purpose and context in which they are used.
  • To help achieve a ‘best practice’ standard of community engagement in an increasingly digitised world, practitioners can adopt a 'blended approach' which embraces both online and offline techniques.


Friday 11 September 2020

Participation, Covid-19 and the ‘Digital explosion’ - are we heading towards a more ‘blended’ approach?

This blog post was written for Grasshopper Communications, and was originally published on their website on the 27th July 2020 (view the original post here).

Our work and home lives have changed significantly over the last few months – across the board, we’ve witnessed an explosion in the use of digital technology. For many of us, this has had a significant impact on the way that we conduct public and stakeholder engagement.

Engagement and participation can mean different things to different people. Here, engagement is understood as a process in which individuals, groups, and organisations are actively involved in making decisions that affect them.

This may involve engaging with specific interest groups and/or the wider public. Extensive and inclusive community and stakeholder engagement is fundamental to project delivery in many key areas of work, including planning, development, implementation, decision-making, research, consultation, information provision, and policy.


Covid-19 and the digital ‘explosion’

Lockdown has resulted in planned and ongoing engagement activities being cancelled, postponed, and/or moved online. While using digital and online tools for engagement is not new, there has certainly been a noticeable increase in the use of these approaches as face-to-face contact has been restricted.

Over lockdown, different groups and organisations have been using a variety of virtual tools such as webinars (e.g. Zoom), online surveys, social media, and virtual exhibitions. Specialist online consultation platforms (such as Commonplace, which uses a holistic, inclusive, and innovative map-based approach to online engagement) have also become more widely used. Other interactive web-based platforms for place-making and community engagement include EngagementHQ, Participatr and The Future Fox. A multitude of tools are often used (and combined) at different stages of the engagement process, selected according to their suitability for different audiences and/or project outcomes.

There’s been a lively discussion around which tools are available: what works well, what doesn’t, and areas for future innovation. Grasshopper Communications have been reflecting on this on their insight blog since lockdown began (also see their digital community engagement group on LinkedIn, which was set up to connect engagement professionals and share resources). A great way to stay up to date with digital engagement events and resources is Twitter, by following others and using relevant hashtags.


Engagement during lockdown: what can we learn to inform future practice?

Covid-19 has driven a huge shift in the way we use digital communication, and it offers extensive scope to drive forward change in community engagement around placemaking at a pace not seen before.

My PhD research aims to explore how digital tools help to improve engagement in planning and decision-making processes. By asking important questions about how we can engage with people in the most effective, fair, and inclusive ways possible, we can help keep important conversations going to inform strategies for the future.

My infographic “Considerations for digital engagement” summarises some key themes and important questions we can ask when developing engagement strategies in the future. We need to think about:

  • Practical considerations for digital engagement; e.g. understanding what’s changed during the lockdown, what barriers exist to uptake, and important concerns such as privacy, security, and GDPR.

  • Ethical implications of using digital tools, and how these impact the quality of the engagement process. This includes digital inclusion and exclusion, equality and power relations, and the ease of connecting and engaging with quiet, under-represented, or ‘hard-to-reach’ groups.

  • Future innovations and exploring whether there’s an optimum ‘blend’ of face-to-face and digital techniques. This includes considering how we can make well-informed choices regarding the most effective and inclusive approaches for different projects and audiences.

The lockdown provides a unique opportunity to understand the value and appropriate use of different digital engagement tools. We can consider people’s responses and attitudes towards different engagement approaches – do those involved (e.g. communities and key stakeholders) feel that engagement is a higher quality when online, or in-person? It’s useful to think about how we use different tools, their impact on the engagement process, and how these choices affect the knowledge produced.

Thursday 4 June 2020

Using automated transcription software for qualitative research: an example (part 2)


This blog post contains some examples of speech-to-text transcription apps which could be useful for qualitative researchers, e.g. to quickly transcribe and summarise meetings, interviews, or conversations. After an overview of some key features, I reflect on some key considerations for using these apps – for example around ethics, inclusion, and privacy in digital/automated methods. This is the second post in a two-part introduction to using automated speech-to-text apps – see part 1 for an overview and background information.

In this post, I reflect on my experiences of using Otter.ai to record, take notes, and embed photos during the Talking Maps exhibition at the Weston Library in Oxford (I’ll write a separate blog post about this exhibition, as it was fantastic!). At this exhibition, we joined a large group as part of a guided tour with Stewart Ackland from the Map Department at the Bodleian Library. With permission, this tour was recorded using the Otter.ai app on my smartphone (a Samsung Galaxy S10).

Of course, you can do a lot of the following tasks manually (or by using Natural Language Processing features in a programming language/environment, NVivo, or similar). However, these in-built features in Otter.ai could be very useful for those who are new to automatic ways of transcribing, summarising, and displaying qualitative data (or would benefit from having these features in an accessible, engaging, and free mobile/computer app).

Automatic word frequencies

Otter.ai automatically finds key words, i.e. the most frequently mentioned words. It displays these as a list at the top of the transcription once it has finished processing the conversation after recording. The words are ordered by how frequently they are mentioned, and you can click on any of them to highlight it throughout the transcript. Otter can also generate a word cloud from these frequent words, with the size of each word proportional to its frequency.

Word clouds are by no means a sophisticated way to analyse text; however, they do provide a quick, easy, and engaging way to see which words are most prevalent in your transcript. For example, the photos at the beginning of this blog (parts 1 and 2) are word clouds created from the text in the posts, including words I've frequently used like 'transcription', 'otter.ai' and 'example' (see the bottom of the article for the citation). In the word cloud above, you can see that our conversation at the map exhibition was (unsurprisingly!) about maps, and things related to maps (country, area, land, ocean, world, Europe, people, etc.). It’s important to note that the transcript has been automatically cleaned so that common English words (e.g. “so”, “if”, “and”) have been removed for you, so they don’t affect the frequency of the words you might be most interested in.
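
If you'd like to recreate something like this outside the app, below is a minimal sketch in R using the tm, wordcloud, and RColorBrewer packages cited at the end of this post. It assumes you've exported your transcript as a plain text file (transcript.txt is a hypothetical filename), and the stop-word removal step mirrors the automatic cleaning described above.

library(tm)           # text cleaning and document-term matrices
library(wordcloud)    # word cloud plotting
library(RColorBrewer) # colour palettes for the word cloud

# Read the exported transcript and build a single-document corpus
text <- readLines("transcript.txt", warn = FALSE)
corpus <- Corpus(VectorSource(paste(text, collapse = " ")))

# Clean the text: lower-case it, strip punctuation and numbers, and drop
# common English stop words ("so", "if", "and", ...) as Otter.ai does
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)

# Count how often each remaining word appears
dtm <- TermDocumentMatrix(corpus)
freqs <- sort(rowSums(as.matrix(dtm)), decreasing = TRUE)
head(freqs, 10)  # the ten most frequent words, like Otter's key-word list

# Draw a word cloud, with word size proportional to frequency
set.seed(42)  # fixes the random layout so the plot is reproducible
wordcloud(names(freqs), freqs, min.freq = 2,
          random.order = FALSE, colors = brewer.pal(8, "Dark2"))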

Exploring word frequencies

In the previous example, you can see that ‘field’ was one of the key words in this conversation about maps. If you want to quickly find out more about this word, you can click on it to highlight every mention throughout the transcript (like pressing CTRL + F in a document). Let’s have a look at where ‘field’ is mentioned in the Talking Maps transcript.

When we navigate to the mentions of ‘field’ in the transcript, we can see that they cluster around 17 minutes in – the exhibition guide is talking about a very interesting map from the 1600s, which depicts common agricultural practices at the time.
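
You can do this sort of key-word lookup outside the app too. Here's a quick sketch in R which, again assuming the transcript has been exported as plain text (transcript.txt is a hypothetical filename), prints every line mentioning a given word along with its line number:

# Find every line of an exported transcript that mentions a key word,
# roughly like clicking a key word in Otter.ai (or pressing CTRL + F)
lines <- readLines("transcript.txt", warn = FALSE)
hits <- grep("field", lines, ignore.case = TRUE)

# Print each match with its line number for quick navigation
for (i in hits) {
  cat(sprintf("line %d: %s\n", i, lines[i]))
}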

As you might be able to tell from the transcript itself, one downside to Otter.ai is that it transcribes almost everything that is said. This can be an issue because, naturally, humans don't always speak in coherent, flowing sentences; we change direction mid-sentence (and pause, ‘umm’, and ‘err’ a lot). You can end up with a lot of repetition, breaks in sentences, and some sentences that don’t make sense. It’s therefore useful to listen to your recording as you edit (you can do this easily within the Otter.ai application, or elsewhere). When you edit your transcript in Otter.ai, it automatically realigns your text with the audio, which is handy. You can also see that it highlights the position of the word ‘field’ along the time bar at the bottom of the transcript, which makes it easy to skip to the word you are interested in.

Editing, photos, and speaker assignment

As I mentioned before, no transcription software perfectly understands every word that is said – particularly if there are different accents, speeds, and tones of speaking, multiple people trying to speak at once (as was the case with our conversation in the museum), or if acronyms and unusual place names are used. However, you can easily edit any mistakes while listening to the recording, before you export the file for further analysis. Over time, Otter.ai will learn to recognise the way you say certain words, and you can also teach it names, words, acronyms, and phrases to improve the accuracy of the transcription (up to 5 words for free, or 200 if you upgrade).

The examples below also show how you can easily integrate photos within the flow of text. This can be done by taking a photo on your smartphone, for example, while also recording on Otter.ai (in the mobile app). This is useful to look back on, so you know exactly what the speaker is referring to in the conversation (in this case, unsurprisingly, it's maps again!). It's also a nice feature for researchers interested in mobile research methods (particularly those involving walking interviews, smartphones, and/or human-technology interactions), though background noise and recording multiple participants might be an issue here.


You might have noticed that in the pictures above, the person who is speaking is labelled as ‘Speaker 1’. At first, the speaker’s name was blank. Once I had labelled it, Otter began to scan through and automatically label ‘Speaker 1’ whenever it picked up that they were saying something. This is mostly accurate (ish), but you might want to double-check by listening back through your recording. You can also save the names of (or code names for) ‘suggested speakers’ in the Otter app. I’ve found this useful when recording regularly occurring meetings, for example those with my PhD supervisors.

Is there anything I should consider before using it?

Otter.ai is not 100% accurate, and it might not be the best, most reliable (and most time- or cost-effective) choice for everyone. Otter can struggle to recognise the voices of different speakers and to pick up some accents, and it is also quite limited in the languages it recognises (though this is something the company is improving). It also requires a clear recording with little or no background noise and can struggle to transcribe multiple voices when people speak at once (however, it worked rather well for me in a museum with lots of people talking in the background!). Further to this, Otter can miss out quite a bit of punctuation (or, on the other hand, overuse it, putting unexpected full stops in place of a natural pause), which requires further edits. Finally, particularly if you are using your mobile phone to record meetings and interviews, it is worth noting where the microphones are on your device to ensure that you can record two or more voices (e.g. most smartphones have mics on the top and bottom of the handset).

As with any digital research tool, you might want to critically evaluate the ways that technology includes (and excludes) individuals and groups of people. Ethics, inclusivity, and power relations are all important considerations here, including how they affect the knowledge produced by the research encounter. If you’re interested in digital research methods and ethics, this is a topic of interest in digital geographies, for example – the RGS-IBG Digital Geographies Research Group hosts and promotes some great events and resources. Considering the explosion in the use of digital tools during the coronavirus pandemic and social distancing measures, this LSE Impact Blog post outlines some practical and ethical considerations of carrying out qualitative research under lockdown (this Google Doc on ‘doing fieldwork in a pandemic’, edited by Deborah Lupton, also contains some excellent resources).

Importantly, the use of speech-to-text applications (including Otter.ai) for research purposes comes with important concerns regarding privacy and security. This is because sections of your recorded information could be used for training and quality-testing purposes – see the Otter.ai FAQ “Is my data safe?” for more information on this, and view their full privacy policy here. It is important to carefully consider the privacy and security of any application or service you use for transcription, particularly if you are responsible for handling sensitive data. It is also important to think about how using apps like Otter.ai fits in with your institution’s GDPR and ethics guidelines, and/or the guidelines of the organisation you are collecting data for. As best practice, you should gain informed consent from anyone you wish to record using Otter.ai (or similar apps). You should also consider whether your institution’s ethics committee needs to be aware that you intend to use this method of recording and data storage.

Conclusion: are automated speech-to-text apps useful for qualitative research?

Automated speech-to-text applications have the potential to be incredibly useful, if used with consideration and for suitable applications. Apps like Otter.ai can save you a large amount of time by letting a computer perform the labour-intensive task of transcription for you. They can also help by identifying emerging themes, highlighting key words, embedding photographs, and visualising your text (for example, as word clouds).

However, speech-to-text apps are not 100% there yet in terms of accuracy and reliability, and so require a certain amount of manual editing after the transcript has been generated. That said, some manual editing isn’t necessarily a bad thing, as listening through recordings again can help you gain a better understanding of the data you have collected. As with many digital methods, these apps may also provoke concerns regarding the ethics, privacy, and security of data collection, processing, and storage.

In sum, artificial intelligence and machine learning in speech recognition have certainly come a long way, and apps like Otter.ai will only continue to improve. Speech-to-text transcription is an exciting and continuously developing area, with great potential to improve working conditions for social scientists and other researchers. I’d definitely recommend looking at Otter.ai and testing different speech-to-text transcription apps for yourself, to see what works best for you and your research!

Links to some useful resources:

Wordcloud blog title image created from the text in this article in R. R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.r-project.org/. The tm (v0.7-7; Feinerer & Hornik, 2019), readtext (v0.76; Benoit & Obeng, 2020), wordcloud2 (v0.2.2; Lang, 2020), RColorBrewer (v1.1-2; Neuwirth, 2014), and wordcloud (v2.6; Fellows, 2018) packages were used.

Introduction to using automated transcription software for qualitative research (part 1)

If you’ve conducted interviews and meetings as part of your research (PhD or otherwise), you’ll likely have experienced the arduous task of transcribing hours of pre-recorded audio. There are a lot of reasons why you might want to transcribe audio recordings: to generate interview transcripts, produce written and annotated meeting notes, or to transcribe videos and other recordings to make them more accessible to users. This blog post is the first of a two-part series on using transcription apps for qualitative research, which aims to provide a brief introduction to these technologies and their potential uses and benefits (see part 2 for a tutorial using a practical example).

Unless they outsource this task to a costly service, researchers often spend hours typing up interview data. While manual transcription (and listening to/reading through your transcripts) is arguably important, not least for the accuracy of interpretation and getting a ‘feel’ for your interviews, it is very time-consuming. If you’ve got the time to explore and test out different options for making the transcription process more efficient, it can be rewarding and free up time for other tasks. At the end of this blog post, I’ve put together a list of 10 useful features of one speech-to-text app, Otter.ai. Transcription apps like Otter.ai have huge potential to transform this essential research task; for example, this blog post considers what the development of these technologies might mean for analysing and interpreting qualitative data.

It is important to be clear that there is no ‘right’ way to transcribe your data, and there are certainly a variety of approaches and tools you can use to make the process more efficient. In this blog post, I reflect on my personal experience of using (free) automated transcription applications to keep written records of interviews, meetings, and so forth during my PhD research. While I focus on free, automated transcription tools, there are a lot of great human and computer transcription services and applications out there. It’s worth having a look and weighing up the options yourself, to see what suits your needs and preferences.

What is automated speech-to-text transcription?

This is when a computer transcribes your interviews or meetings for you. Quite simply, all you need to do is have a clear audio recording with minimal background noise, and a state-of-the-art machine transcription service will convert your audio to text, almost instantly or in a matter of minutes.

Sound too good to be true? Speech recognition software has certainly come a long way in recent years, and it is now widely available at our fingertips. For example, smartphones and smart speakers easily recognise, process, and respond to our voice commands. We can also easily convert speech to text on most smartphones (however, these built-in tools are often limited in terms of how long you can record for, and don’t have the additional in-built features that other apps offer).
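
To give a flavour of what this looks like in practice, here's a minimal sketch of automated transcription in R using the googleLanguageR package, which wraps the Google Cloud Speech-to-Text service. This is just one example of a machine transcription service (it's not the engine Otter.ai uses), and it assumes you've set up a Google Cloud service-account key; auth.json and interview.wav are hypothetical filenames.

library(googleLanguageR)  # wrapper for the Google Cloud Speech-to-Text API

# Authenticate with a Google Cloud service-account key (hypothetical filename)
gl_auth("auth.json")

# Send a clear audio recording off for transcription (hypothetical filename)
result <- gl_speech("interview.wav", languageCode = "en-GB")

# The result includes the transcript text alongside word-level timings
result$transcript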

Image by Gerd Altmann from Pixabay

How does it work?

If you search the internet for ‘speech-to-text transcription tools’, you’ll likely see a lot of references to artificial intelligence (AI) and machine learning, which are related to computer and data science. AI refers to intelligence demonstrated by machines, as opposed to the natural intelligence of humans or animals. Machine learning is an application of AI that enables computers to automatically learn and improve from experience, in a similar way to how a human or animal might learn a new skill.

The field of interest here is called Natural Language Processing (NLP). NLP is a field of AI that gives computers the ability to read, understand, and derive meaning from human languages. It is through NLP that machines can process what humans are saying and make sense of the language in a way that is both meaningful and valuable to us. It is used in a variety of familiar applications, for example smart speakers and personal assistants (e.g. OK Google, Alexa, Siri, and Cortana), language translation (e.g. Google Translate), and spell checking (e.g. in Microsoft Word, or in your emails).
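
As a toy illustration (and not how Otter.ai works under the hood), one of the simplest building blocks of NLP is tokenisation: splitting raw text into individual word tokens that a machine can then count and compare. In R, for example:

# A toy example of tokenisation, the first step in many NLP pipelines:
# break a sentence into lower-case word tokens and count each one
sentence <- "Otter.ai converts your speech to text using machine learning"
tokens <- unlist(strsplit(tolower(sentence), "\\s+"))
table(tokens)  # a simple 'bag of words' the machine can work with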

What are the best free transcription tools? 

There are lots of helpful websites and articles which summarise the top free (and paid) transcription software in 2020. For example, this report and this article. It’s worth exploring the different options to find services and applications that best suit your research.

Over the last few months, I’ve been using an application called Otter.ai (see this Forbes article for more information), which is a web and mobile application that provides speech-to-text transcription. It’s free to use for all its basic functions (you can pay a subscription fee for more storage space and some extra features, which I haven’t tried yet). Otter was trained with machine learning on millions of hours of audio recordings, so it can automatically transform audio to text with a pretty high degree of accuracy.

Recently, Otter.ai has also launched a new feature in partnership with Zoom, which allows you to record meetings via the popular conferencing and webinar platform. This lets you view, highlight, comment on, and add photos to notes collaboratively during team meetings. I haven’t tried this yet either, but I can think of a few ways it would be useful for researchers, not least to improve accessibility and enhance the quality of meeting notes (as of spring 2020, there’s a 2-month free trial available).

There’s no ‘perfect’ transcription application, but I’ve generally been quite impressed with Otter.ai – it certainly is useful and saves a lot of time. I’ve highlighted some of its useful features in part 2 of this blog post, alongside some reflections and important considerations when using these apps for research – not least regarding data security and privacy. The intention of these posts is to demonstrate the utility of this sort of software for researchers engaged with qualitative research methods, using Otter.ai as an example.

What can I do with it?

Here are 10 things I like about Otter.ai, all of which are covered across this two-part series:

  • It's free to use for all its basic functions.
  • It transcribes clear audio to text almost instantly.
  • It automatically identifies the most frequently mentioned key words.
  • It can generate word clouds, with word size proportional to frequency.
  • Clicking a key word highlights every mention throughout the transcript.
  • Editing a transcript automatically realigns the text with the audio.
  • You can embed photos within the flow of the text while recording.
  • It labels speakers and can remember ‘suggested speakers’ across recordings.
  • You can teach it names, acronyms, and phrases to improve accuracy.
  • It integrates with Zoom for recording and annotating meetings.

I’m interested - can you provide any examples of how I can use it?

See part 2 of this blog post for annotated examples of some key features of Otter.ai, which have the potential to be useful for researchers to transcribe interviews, meeting notes, etc. I also highlight some things which require careful consideration when using the app, for example potential issues regarding accuracy, ethics, and privacy.

Links to some useful resources:

Wordcloud blog title image created from the text in this article in R. R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.r-project.org/. The tm (v0.7-7; Feinerer & Hornik, 2019), readtext (v0.76; Benoit & Obeng, 2020), wordcloud2 (v0.2.2; Lang, 2020), RColorBrewer (v1.1-2; Neuwirth, 2014), and wordcloud (v2.6; Fellows, 2018) packages were used.
