Transcript: Data, Evaluation, and Impact


This interview was originally recorded on December 10, 2021, as part of Leoni Consulting Group’s All Things Marketing and Education Podcast.

Access this episode's show notes, including links to the audio, a summary, and helpful resources.

Elana:
Hello, and welcome to All Things Marketing and Education. My name is Elana Leoni, and I’ve devoted my career to helping education brands build their brand awareness and engagement. Each week I sit down with educators, EdTech entrepreneurs, and the experts in educational marketing and community building. All of them will share their successes and failures using social media, inbound marketing or content marketing, and community building. I’m excited to guide you on your journey to transform your marketing efforts into something that provides consistent value, and ultimately improves the lives of your audience. Hi, everyone. Welcome to this week’s episode of All Things Marketing and Education.

Today I have the absolute pleasure to talk with Jason Torres Altman. Jason has devoted a lot of his career to looking at things that are slightly depressing in this world. [Laughs]. I would say he’s really been deep diving into the inequities of education. And in particular, he evaluates the inequities of education. He also partners with people like me. I feel very lucky to work alongside him to evaluate the efficacies of ed tech products and services, and education programs for our clients. So I’ve worked in the trenches with this man. I am so excited to share with you his genius, because I’ve learned so much about how I view education and the landscape, and even as it trickles into marketing and community as well. So now I’ve worked with Jason for over three years; I would say you are, Jason, one of the most detail-oriented, funny, and thoughtful human beings I’ve met. In a meeting about maybe a year ago, I introduced you, Jason – we were having some fun with a client – as an overall smart dude. And I am excited for this smart dude to talk about his passions and thoughts in education with all of you. So, welcome, Jason.


Jason:
Thank you very much. That’s a warm welcome. I believe you may have stretched the fender on that a little bit, but I do the best I can when I get a chance. And, I don’t know, if any of those things are true about me, I’ve done all right. That was very sweet of you to say all of those things. And yeah, podcast audience, it’s nice to chat with you, or have you listen to our chat – I’m not quite sure how you introduce yourself to the audience that way, but –


Elana:
I think it’s important to note that – Jason, this is probably your first podcast, right?


Jason:
It is my first podcast, yes.


Elana:
Sweet. The first.


Jason:
Yeah.


Elana:
Why don’t you tell the audience just a little bit about your background? How you got into what you do. I know you don’t like to brag, but you do have a lot of papers behind you, things that you’re passionate about in education. Why don’t you just talk a little bit about your background?


Jason:
Sure. Yeah, and we can be brief because the rest of the conversation will probably be far more stimulating, but this is kind of all I’ve done for my career of more than 20 years now. And my specific role in the environment of people trying to do things to make things better for students and teachers (and in parallel for me) in public schools, is to help folks who are doing something, usually something a little bit extra – sometimes it might be like an after-school or a tutoring thing with students. Very often, though, it’s something that’s happening with the teachers, some other kind of support, maybe an adult learning experience or something like that. So there’s something else going on, it’s not just the core curricula or whatever. And I support the people who are doing those things.

So I don’t do those things myself, but I work with them to understand a little bit more, or help them try to figure out more completely why they’re doing what they want to do, help them understand what they may be doing well, like the process-oriented stuff, and things they might want to change or try to do a little bit better. They call it, in the evaluation realm, fidelity of implementation. There’s always the dream or the picture of how it might look in a central office. A lot of the folks I support are doing a statewide program, so by the time you get into the 72nd different district doing something in a state, it might look a little different than the first. So we work with folks to understand that. And then of course everybody always wants to know, hey, are we making a difference, right? And so an evaluator who’s working in this space, which I am, ends up working with some of those outcomes type of data, now historically, right?

And classically, that often was the student achievement data, and maybe specifically data that’s coming out of a state testing environment. And there’s another whole podcast you could have, a conversation about the merits of that particular data, but it was always there for students between third grade and eighth grade, and once in high school, and it was there for every state, and it was done every year. But clearly we’ve moved past that, right? And there are other outcomes that folks might be interested in. Maybe it’s kind of simple, you know, in a school – are the students attending more? Is teacher wellness improved? Is classroom engagement stronger for the average student? Right? So we end up working with maybe surveys, interviews, any types of, I guess, standardized surveys that have been used before and been found to be valid and reliable.

We actually, along with a lot of – we work with a group of teachers, and they have an online community that they are a part of, and we just did a formal survey. It’s one that’s used all over the world within communities to measure the strength of that community. So sometimes it is a survey, but it’s something that’s a little bit more standardized. So, yeah, I end up being the one who might be looking at the Excel spreadsheet for a while, trying to prepare the data once we know where the numbers lie, in a way that would help the folks that we’re working with understand what might be going on, to help lead them to make their own interpretations. Provide a little, I guess, facilitation and support around trying to actually use the data, which is actually probably the most important part of this job. That you do something, provide support that is in the end actionable, and something can be done differently in the future, because you were involved. So that use is a big part of this job.

So, yeah, that’s a little bit about what I do. I have worked with folks in so many states. I live in New Orleans, I do have a local client or two, people just doing things in this city.


Elana:
So, Jason, given that you’ve done evaluation for states, for overall programs, organizations, and even ed tech companies, and you’ve been at a lot of evaluation industry conferences, and just been in different nooks and crannies of the space, what do you see as the most common misconceptions about evaluation?


Jason:
It’s a really good, easy question that might have maybe a little bit more complicated answer, because I think it depends who you’re looking at. So, for example, if you have a foundation that is giving funds to folks, and expecting results, outcomes in return, somebody in that – a misconception in that arena might be that this evaluation is going to prove that we made an impact. But very often those funds are only given for one year, it’s absolutely … It’s not likely in so many cases that any intervention or program that was started with this funding would be able to do anything more than the shortest of short-term outcomes, and be able to show that after one year. So a misconception on that side of things would believe the evaluator is the one who produces the proof that their money made a big difference in the world. Right?

On the programming side, so pretend that you’re a director for a statewide program for a second. Very often evaluation is something that actually isn’t a part of the programming, especially if somebody gets funding for the first time. Until they realize it’s required about a month before the report is due. So it’s not really a misconception, it’s that it’s underutilized and maybe ignored a little bit. And they miss out on the parts that we talked about earlier where they could’ve been really honing in on the why they’re doing things, or learning more about the way they’re doing things, what’s working for who, and when, and end up only really wanting to understand from somebody in my position whether or not the thing works. Right? But at that point, when you’ve gotten to that end, there isn’t so much that can be done any more. The report is produced and packaged, and sent off to a funder.

And then for the folks who are involved in the programming, I think there’s still – and it used to be very bad, and clearly – you mentioned equity before, right? In the spaces in which we provide support, the programs, in all honesty, aren’t usually trying to produce more positive outcomes for white men. Right? [Laughs]. So as an evaluator who is white and male, and maybe picture yourself in a – maybe an urban public school setting, or something like that, and if there’s observations happening, or something like that, right? So the misconception that an educator in that case might have is that the evaluator is scary, and that they’re trying to do something to them.

Nobody really wants to be observed. Folks feel like the evaluation is actually of them, of their own teaching, or for a student, of their own learning. And so you always as an evaluator have to be really careful about how you set that lens, and have folks understand that you’re looking at a more expansive view of whether or not the program’s doing what it’s supposed to be doing, and what it’s doing best. So a short question, another long answer, but I think it really does – it just depends where folks are sitting in whatever the thing that’s happening in a school might be.


Elana:
Yeah, that’s helpful, and I think in the context of this conversation, I like to think of our audience in two different parts. One is educators on the ground who we’ve collaborated with together, and the other is ed tech organizations. So you might be a founder, you might be part of the executive team, you might be a marketer new to education, and you found this podcast. And for both of those groups, I think everything you said actually somewhat applies: evaluation is somewhat done when it has to be done. It’s like the oh-my-gosh moment where it’s either required or you need something for funding. And we’ve also worked in projects where if the project or the product was designed with evaluative outcomes in mind, it would help them prioritize the product better. Am I saying that right, Jason?


Jason:
Yeah, I think so. And what happens for folks on the nonprofit side, if they’re trying to solicit funds through a proposal, for example, to a foundation, very often if the type of funding or the amount, the scope of funding is over a certain limit, maybe gets into $10,000 or more, or $50,000 or more that’s going to be exchanged in that particular delivery for that program, a lot of times the evaluation part is required. It’s mandatory that it be a part of the actual proposal itself. But folks don’t have months and months, and years and years to put these proposals together, so people kind of do the best they can, right?

And then just kind of jumping the fence over to the ed tech company view, like you have, people are super creative, they have really good ideas. You and I have worked for a tech company where it’s really a part of their mantra to move as fast as they can and be as creative as they can.

And so it’s interesting sometimes, there is a little bit of a tension sometimes with evaluation, which is a little bit more incremental. It doesn’t have to be slow, and there’s new – definitely a new wave of techniques that evaluators are using that allow the evaluation to provide the just-in-time, as-I-need-it-now types of information that are necessary to make decisions. But it does require sitting back and really documenting the reasons why you’re doing things. Is there some theory behind the actions that you anticipate having happen if you have a product that you think is going to be helpful for teachers in the classroom? It’s more than just believing that we made this thing, it’s super creative and fun, and we anticipate that all teachers will use it, and it’s going to improve student achievement across the board.


Elana:
Yeah, and really putting our evaluator hat on, it reminds me a lot of my design thinking days, because it’s really about putting the audience in front, and saying what is the data. What does the data tell us? It’s not about us or what we think, or our assumptions – and sometimes we’re part of the executive team, or we’re a founder. We’re emotionally attached to this product. We have designed this thing to improve this thing, and it’s going to do that. And if the data tells us different, they’re just not using it right, you know? So we get emotionally attached, and what you really want is to have evaluation take a seat at the table and say here’s what’s not working in all of this.

And I think that that is a hard, hard thing. But when you talk about theories of action and theories of change, do you want to just tell the audience a little bit about that exercise and how some organizations and companies use that to really hone them back to the why? Like at LCG, we work with people on bringing organizations back to their mission and their why, and having that trail out into communications. But how do you do that into the product, and making sure the impact is there?


Jason:
Yeah, well, that’s a hard question for me to probably answer just across the board, but I do want to actually step back, one step back, and mention – you had – you were talking about how folks, especially folks in leadership positions or who have decision making authority within whatever entity or organization, might be doing the thing that they think is going to help teachers, or think is going to help students. They are tied to their ideas. We are emotionally invested. Folks wouldn’t do those jobs, folks wouldn’t be at the place where they are – especially if you consider the nonprofit sector. These are not like super high-paying jobs where people tell you thank you very often, right? So people really believe in what they’re invested in. It’s actually something with nonprofits – I think it’s called founder’s syndrome, how sometimes someone just actually needs to leave before the program really flourishes. But that’s natural, and that’s wonderful.

And actually, Michael Quinn Patton, who is one of the foremost evaluators still, in the country – he’s older and nearing retirement age, but was actually one of the first evaluators, because it’s a fairly new field. He also kind of dabbles with cartooning. So this is one of the people who does evaluation at the highest levels, and he’s written some cartoons that place the evaluator into the place of the court jester in the king and queen’s hall, or whatever, right? In the old times [laughs], usually the court jester was maybe the only one that could get away with giving the king or the queen the bad news, and not losing their head, right? And so in a modern-day context somebody who’s supporting the leadership folks with the decision-making authority, who plays that role is an evaluator. Maybe doing it with some data and some jokes, and not just all jokes, but [laughs] having a sense of humor actually is very important, probably, in some of those conversations that I’ve had. At least, I've forced out some tries at a sense of humor.

So moving forward to your question, then, working with leaders who have a concept but maybe not a full-fledged theory about what it is they want to do, and how they want to do it, and even maybe why. I think folks usually get to the point where there’s maybe a mission statement, a vision, they understand the context, the lay of the land in the area that they work in, for example, an ed tech company. And these things probably are written down, although sometimes folks move so fast that it’s just kind of assumed knowledge. Folks assume that the whole team believes the same thing, and that everybody feels the same way; that they’re in this and doing this for the same reason. Sometimes just the act of writing some of these things down helps to indicate areas where there might not be 100 percent agreement about why folks are doing what they’re doing.

But when you get into some actual formal theory design – and in an evaluator’s role you might actually facilitate how some of this might happen – you’re really trying to get folks to understand globally what in communities, within education or ed tech, in that particular industry, or in the K-12 space or elementary age space, or whatever, what are folks generally trying to change? And towards what ends, right? And what are the steps that folks feel like they’re –? You know, if these things would happen, then it would lead to this better outcome. And if that thing happened, one more step down the line things would be a little better this way. And if that happened, then – you know, a series of if-then statements. On a theory of change level, it’s understanding the context of what everybody’s trying to do, right? And writing that down and understanding that.

Because then the theory of action, which is another thing that you mentioned, is something that you would do internally for your own product, for your own program on a nonprofit level. This is the space we’ve carved out where we contribute to that overall theory of change. Sometimes it’s really helpful to work backwards a little bit. So where is the finish line? How are we going to know that we’re going to be on track to getting to that outcome? What are some of the mileposts that we might know that we’ve passed? Or where can we anticipate a fork in the road where we’re going to have to decide whether we go right or left?

Documenting some of these things on your theory of action, which is then going to describe how you intend to carry out your given program or the dissemination of a new tool into the industry, or whatever. You carve out your own space, and you definitely draw consensus among those that you’re working with, collaborate on those types of things. At that point you’ve gotten really close. You’ve done 80 percent of the work to really setting up a good plan for how you’re going to track your success where an evaluation would come in later on. But those theory documents are really the start, right? And in many cases it’s just taking the time to sit around a virtual table, I guess, at this point, and articulate those things, debate the merits of certain ideas with your team, and get them documented.


Elana:
Yeah, in the organization world, in the startup world, really in any business I’ve been a part of, we focus a lot on goals. SMART goals, OKRs, KPIs, add in an acronym, or RACI, we do all of these things. But what I love about evaluation is that it puts the impact first. What are you ultimately trying to do? How do you specifically contribute to that impact? And that’s so important. It seems so much like common sense, but sometimes in organizations – and I’ve been in quite a lot of them – we just get caught up in the actions and don’t really connect it to the impact. And I love that that’s what you’re saying.

And for anyone that doesn’t know how to get started with the theory of action or theory of change, we’ll add some templates in the show notes that you can download. And you can go to leoniconsultinggroup.com/eight for these show notes, and we’ll add some templates for you to get started. But this episode is launching in January, which is a perfect time to rethink your plans for the entire year, and overlay impact – overlay what are you ultimately trying to achieve in terms of impact. Do you want to add anything to that, Jason, as people start planning for the year as well?


Jason:
No, I mean, it’s well said. And these theory documents can help serve as your roadmap too, right? If you only ever know that you’re trying to drive from the Bay Area to New Orleans, and your destination is New Orleans, but you haven’t taken the time to write down that eventually you’re going to need to turn from – what would it be? I-5 onto I-10? You could do it in one turn, I think, actually, to get all the way here. But if you haven’t denoted that that turn is going to be there, and this is going to be a place where we can see are we still on track or not, you could get to the end of next year, or the next development cycle, or whatever it is that you’re working on, and not realize that you missed something that had you only known to look for it, it would’ve been right there flashing you in the eyes. Where you maybe could’ve dipped another way, or made another choice that would’ve ended up being a more positive experience or leading to something better happening for students, or for teachers.


Elana:
Yeah. Agreed. It puts you in the driver’s seat, I think, when you think about impact. And impact can also bring teams together, focused, more mission-oriented. Everyone in ed tech got into ed tech because they do want – they care about students, they care about the learning experience, they care about the space, they care about educators. And I remember doing retreats where theory of change and theory of action were up there, and we got to brainstorm and be a part of it, and own that. So if any of you are thinking about doing retreats with your organizations or companies, this is also a really great time to bring everyone together and get them re-bought into the mission and the intended impact.


Jason:
Mm-hmm. Definitely. And I think if that is something that folks are planning on – you know, it’s interesting. I don’t know that the nonprofit space and the startup space are actually that different. Most nonprofits are very much startups. They’re young, they’re aspirational, they don’t have a lot of seed funding, necessarily, right? There are some larger nonprofits that are clearly more – you know, foundationally they go back a long time, and have revenue streams that are consistent from year to year, but actually a lot of nonprofits are in that space too. And for any startup or nonprofit who is starting to generate some of these ideas, and debate the merits of whether this theory may be practical or may be off base, the one thing I would encourage folks to do who are working in especially the public pre-K-12 space – but I would assume – I don’t work as much in the higher education industry or with folks in that area, but I would assume it would be the same there. You probably want some of those folks on your team at the theory stage, not just the we built the thing, we want you to try it out stage, right?

So including educators, active educators too, not folks maybe who got out of teaching and joined the tech world, or got out of teaching and joined the nonprofit world, but to the extent that you can engage active educators who know what it’s like to be an educator right now, which is unlike any time, clearly. Everybody who’s listening to this podcast will understand that things are different than they’ve ever been for folks right now in the classroom. But, yeah, educators have a lot that they can offer in helping to understand potential breaking points and things like that in your theory, and helping to point out areas where there might be significant improvement, before you get too far down the line and you’ve decided what your outcomes and things are going to be, but you go back and there’s one fundamental problem. Whatever it is you’ve created isn’t something teachers want to use in their classroom. Like you missed on the theory. You know.


Elana:
Agreed. And if you’ve listened to any of our other podcast episodes, we talk about bringing educator voice in, working alongside them, but also paying them for their expertise.


Jason:
Yes.


Elana:
So startups have limited budgets, but you do not skimp on educators. They are experts. And on one of the projects Jason and I worked on hand-in-hand, the educators were called subject matter experts, SMEs. We brought them in, they had a pivotal role in designing the product, but also testing it and breaking it, and giving that feedback without any emotion. It’s so important. Jason, when we think about ed tech startups, you said they’re a lot like nonprofits. They have limited budget, limited time, capacity, all of these things. We talked a little bit about how they can get started in evaluation, where they can start looking at frameworks, like theories of action and theories of change, and throw those into some of their strategic planning. How do they prioritize the things within evaluation? Are there little things we could tell them, if they can’t hire someone like you, how do they get started in really knowing their product’s making an impact?


Jason:
Yeah.


Elana:
Just like focus groups, or surveys, or how can they get started?


Jason:
Yeah, I mean, it’s – it’s kind of like anything else, right? The first step is the hardest. The first step out of the door sometimes is the hardest. But … and some of this isn’t really that different, maybe, especially in the ed tech world where you mentioned the KPIs and things like that are very commonplace in the workplace. There is a culture already of using data, and trying to make data-informed decisions, or at least having the data available on a spreadsheet while you go and make a decision because it’s what you wanted to do anyway. But there is definitely a culture there of at least data accumulation. There’s databases and stuff, especially for ed tech startups, right? Whatever it is you might be developing, there’s actually a lot of data being collected as it’s being used, and things like that. So folks are kind of awash in data, but the data doesn’t actually make the evaluation, or something, right? It’s kind of just there, it’s a metric, it’s being measured, but it’s missing all those theory parts, the background parts, evaluation-wise, that we’ve talked about so far.

Actually having a plan for evaluation, what questions are most important for you to answer? Who is the person or people who can help you understand the answer to that question best? When is the best time to ask them? Like doing those types of things. Most folks haven’t probably taken the steps back to do those types of things. So what I would suggest is there is a website called BetterEvaluation, and so that would be maybe a quick Google. The Kellogg Foundation and a couple of other of the larger family foundations in the United States have some pretty significant evaluation resources. That would be a place to see, you know, on their giving websites, or whatever, right? There’s a lot of resources available for folks, because they’re trying to – so in the case of the small nonprofits, they also don’t have an evaluator fulltime on staff. They don’t have a budget to hire a consultant, which is what I am. And so it ends up being the team that does it themselves, which is actually a great way to learn by doing.

So those resources would be good places for folks to go to, to get started. It’s probably helpful to have somebody on your team, no matter how small, and no matter how small the amount of hours or the scope that you can have them devote to this, because we realize people are already busy with what they do 9 to 5 anyway. But having somebody whose responsibility is to be the evaluation voice in the room, to bring the evaluation-type questions to the group, to help slow folks down, and have them write things down. Having somebody have that role, I think, has been helpful in the work that I’ve done both with smaller nonprofits, but also with folks in the ed tech area as well.

And sometimes that’s the hardest part, probably, is figuring out who that would be, and how to carve out their roles and responsibilities so that you can give them a few hours a week, or five hours a week or something, just to get started, so that they can immerse themselves in some of the information that’s out there. There is a – like with everything, there’s an association, there’s a national association for this industry of evaluation. It’s the American Evaluation Association, AEA. And there are nonstop conferences, a lot of – everything’s of course, like with everything else, moved virtual so there’s a lot of little online webinar types of opportunities, and coaching arrangements, and things like that, that folks can get involved in through the professional organization as well.

And then there’s local chapters; we have one down here, Gulf Coast Evaluation. So there is a local branch in California as well, and in most regions of the United States. So getting involved with folks who are doing this a lot, and being a part of their meetings, and being a part of their conversations, hearing what they’re doing, would be good. Again, if somebody can devote just a small percentage of their time every week or some time every month to be able to be a part of those things I think would really help folks.


Elana:
Yeah, and I’m thinking about structures of organizations I’ve worked with in education, and more often than not it’s under the growth area. And growth is kind of an ambiguous term, but in a lot of ed tech we have a head of growth. And what I’ve seen is sometimes when you want accountability for impact, they say head of growth and impact. And that might mean that they’re in charge of marketing, sales, but also have accountability in evaluation of some sort, so that efficacy point of it. But what you said about accountability is really important because it speaks volumes about your organization’s commitment to impact, and not just revenue too, when I see that. And I don’t want it to be jargon for the sake of it, where everyone’s got a head of impact now. Make sure that you’re doing the things Jason is saying, and devoting actual hours, and aligning it with your KPIs or OKRs, or whatever you want to do within an evaluation context. So that’s important.

Jason, I don’t want to forget the educators that are listening, and they’re probably slightly tuning out because we’re talking about education organizations, but some of them find it fascinating. But this question is just for them. When educators are looking at evaluating products – so these might be teachers on the ground, maybe instructional coaches – so educators within classroom settings, in district settings, but then also education administrators that are looking to purchase products. They are getting pummeled by emails, by calls, saying that this product will do x, and this, and teacher retention, and all these big things they’re trying to connect it to. How do they begin to evaluate things and know, when I look at this study, that this is not BS, that it truly is solid in terms of its research? I know that’s a big question, but are there some red flags that you can point out?


Jason:
They are definitely being just totally scattershot torpedoed with this information. Those directors of growth that you talked about are doing an excellent job in their marketing. And so these things are flooding into the inboxes of educators who already have fulltime jobs. And so there is usually, not within a building, sometimes maybe at a district level, somebody who has some role to do maybe subject-specific evaluation of supplemental curricula or online tools, or subscriptions or things like that. But a lot of times it would be because of an individual educator’s interest in a particular product or subscription or something. I think that’s what you’re referring to. And I think for those educators, just like it would be for the folks who might be making that tool in the tech space, starting at the end is still a good idea, right?

What is it that you hope to have be different in your classroom? Either for you, to save you time, or to help you improve the effectiveness of whatever it is you’re doing for all of the students in your classroom, including maybe students with disabilities or English language learners. Or for the students in your classroom, if that’s the target. I’m really trying to increase engagement, or I’m really hoping that this helps with time-on-task, or something like that, right? So I think educators need to start with the end in mind, and then knowing what it is that you’re trying to do, you can probably send a lot of that information that’s coming your way over email to the trash, because it isn’t designed to help you do that thing, right? So now you’ve got a smaller group.

Now, the issue is, with all of these products and things, folks are doing internal work. It’s sometimes hard to tell where maybe an internal evaluation or internal research ends and the marketing starts. And clearly there’s a large amount of crossover, I think, with what’s going on. And so folks need to be technologically literate. Folks need to be maybe just evaluation literate a little bit as well, and look for warning signs. The statistics that might be cited – do they seem legitimate? Do they seem plausible? Does it seem too good to be true? Where was the study done? So the methods. Is it something that folks even are willing to publish, right? In the information that you have, do you have information about which classrooms or which students were benefiting from whatever this different thing was, the intervention or whatever? Can you see those? Was it a large sample or was it just five people?

It’s not that it’s a problem if it’s five people either, just the clarity around whether this is a widespread preponderance of evidence or whether it is more anecdotal at this point. Educators still may want to try it, right? If it’s really a good fit, and it seems like a good idea. But I think being able to understand the methods behind what’s been done. And also, I would imagine – and I’m not an educator myself, but I would imagine you also get emails about very new tools, and there’s not much behind them. Right? There hasn’t really been much study of impacts and things like that. And in that case I think it clearly still might be a great idea to be maybe part of a pilot program or something where you’re one of the ones helping to collect the data to really perfect and fine-tune this tool. But knowing that going in would be really important, right? Versus a teacher who may be thinking they’re using the same tool but that it’s been tested, valid, reliable, gold standard type research, there was a control group and an intervention group, and that these things have happened, and that there’s proof that it worked.


Elana:
Well, thank you, Jason, for your time. I know we could talk about this a lot, and we didn’t even get to the heart of your work with inequities. And we can get into even more evaluation methods and strategies down the road, so maybe we’ll have you back next year. But for all of you listening, I want this to be your call to action: think about your organization, and think – can you map what you’re doing, all the things you’re doing in terms of your planning for 2022, to your actual intended impact? Does your entire organization know that? Do you have something mapped that they can reference?

And do you know how to evaluate certain things, like Jason is talking about – getting up to speed with certain lingo and methodologies, and being able to formulate how we begin to measure this, even on a lightweight scale? So ask yourself those questions. I think this is a good wakeup call, and it’s a great time of year as you start thinking about having the entire year to make a difference, and mapping what you’re doing to actual intended output and impact in the industry of education. And I can’t imagine a better industry to make an impact in right now. So, Jason, I just want to thank you for your time.


Jason:
Of course. Pleasure.


Elana:
You read a lot, you do a lot out there. I am wondering if beyond this talk of inspiration, what keeps you going? What keeps you inspired when you’re feeling like you need a little pick-me-up? So we ask this of all of our guests, and maybe it’s going for a run, because I know you like to run. Maybe it’s some books or some podcasts. Are there some things that inspire you in your day-to-day?


Jason:
You can listen to a podcast while you run, which is nice. Reading a book while running I haven’t had much success with so far. But honestly, all those things help, but I could still just do any job. Right? Of course I would need to acquire the job – I’m not saying anybody would be willing to hire me. But the reason I feel like I can continue to do the specific role that I might play in the whole universe that is public pre-K-12, the provision of a free and fair education space for all students, is that there are millions of teachers out there just giving of themselves far more than anyone should ever ask, every day. Getting hardly any chance at all to recharge their batteries. And they need to have the very best tools they could possibly have, they need to have access to them to use with their students, so that they can go home at the end of the day feeling like they made the biggest difference in the lives of those students that they could.

And I’m able to play a small role in that, and my job is so much easier than having to do any of that, right? I’m so privileged to be able to have this role. So remembering who I’m doing the work for as I’m staring at the spreadsheet for the 17th hour in a row, or whatever, right? Really keeps me charged up and fired up about whatever my small role might be in trying to make things just a little bit better. And I definitely don’t make any mistakes about exactly how small that role is. I know exactly the scope of it. But at the same time, if there was no evaluation happening we would be awash with a whole bunch of great ideas, but would have no idea which ones actually were eventually going to make a difference for teachers and students.


Elana:
Wow, really well said. Jason, thank you so much for taking time. If people want to keep in touch with you, learn alongside you, what’s the best way for them to get a hold of you?


Jason:
Yeah, well, that’s the hard thing with me because I don’t use social media. So I am a member of the Terraluna Collaborative, which is a cooperative of evaluators. So I’m a member-owner in a cooperative. And our website is terralunacollaborative.com. There we’ve got blogs and some things that we are doing to produce information and make it available to folks. And because of the lack of social media, that website is probably the best way to stay tuned or stay connected with me. And in your show notes and things you’re welcome to share my email as well.


Elana:
Great. Well, Jason, thank you again. And to all of our listeners, I want to thank you particularly for taking time out of your day, and knowing that wow, maybe I can learn something. Maybe I can do something different. And that’s really what this podcast is about. So whether it’s about marketing, community, even evaluation within it all, in this crazy place of education that we’re just trying to navigate with all the uncertainties, I thank you for your time, because I know that of everybody – there is no time. There is no time for educators, marketers, anyone in ed tech. So I appreciate all of you. I just heard some things around our analytics, and I want to give a shout-out to all of our listeners in Ghana and India. We have a contingent there, so thank you very much. And we have a lot of information, resources that Jason talked about. Everything will be in this episode’s show notes, and it will be live when you listen to this. So it’s at leoniconsultinggroup, that’s two g’s, leoniconsultinggroup.com/eight.

So all the templates, all the associations we’ll put in there, and Jason’s contact information. So thank you, everybody. We will see you all next time on All Things Marketing and Education. Take care. Thanks so much for listening to this week’s episode. If you liked what you heard and want to dive deeper, you can visit leoniconsultinggroup.com/podcasts for all show notes, links, and freebies mentioned in each episode. And we always love friends, so please connect with us on Twitter @leonigroup. If you enjoyed today’s show, go ahead and click the subscribe button to be the first one notified when our next episode is released. We’ll see you next week on All Things Marketing and Education.

[End of recorded material at 00:46:55]

 


Elana Leoni, Host

Elana Leoni has dedicated the majority of her career to improving K-12 education. Prior to founding LCG, she spent eight years leading the marketing and community strategy for the George Lucas Educational Foundation where she grew Edutopia’s social media presence exponentially to reach over 20 million education change-makers every month.

 

Jason Torres Altman, Guest

Jason Torres Altman (he/him) provides evaluation support to entities providing advocacy for, policy support for, and programming in K-12 public schools, extension services, and community development. Since his career began in 2002, in all project work he has emphasized elevating the voices of the most affected, and often least empowered stakeholders, as well as deeply considering nuance and context at the local level. His desire is that evaluative support can contribute to the changing of hearts and minds about what is important in local communities and the responsibility to serve local communities in the way that they choose to be served. Jason is a published professional with more than 40 academic papers, reports, and journal articles, and more than 80 academic presentations to his credit.


About All Things Marketing and Education

What if marketing was judged solely by the level of value it brings to its audience? Welcome to All Things Marketing and Education, a podcast that lives at the intersection of marketing and you guessed it, education. Each week, Elana Leoni, CEO of Leoni Consulting Group, highlights innovative social media marketing, community-building, and content marketing strategies that can significantly increase brand awareness, engagement, and revenue.


Rate, Like, and Subscribe

Let us know what you thought about this episode by rating and reviewing our podcast. Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you loved most about the episode! Also, if you haven’t done so already, subscribe to the podcast to be notified when we have more content to share with you.