As a member of the CLIR/DLF Post-Doctoral Fellows program, I attended this fall’s CNI meeting with the rest of the cohort of first-year CLIR postdocs — a special treat, since attendance at CNI is normally restricted to two people from each member institution, and two people from McMaster were already going.
The CNI meeting is a hybrid of the industry and academic conference styles: in every session I saw, people actually spoke rather than reading papers, and some sessions were hour-long talks given by a single person. The speakers’ backgrounds varied: some were industry professionals, others were librarians or alt-ac staff, and a few were professors.
It was an interesting conference, and one I suspect I would benefit from attending again, if I can make it work, because there do seem to be trends that ebb and flow. CNI caters to a particular hybrid community — one which, in some ways, I’ve been part of for a few years now, and which, in other ways, I’m new to, as a recently minted PhD and as someone newly employed within a library to do digital scholarship work and development. In short: I’m still learning how to assess the buzz, and to judge what might be short-term trend talk vs. recurring concerns.
With that caveat, here’s what I noticed/found myself discussing with others at the conference, and what I expect I’ll be thinking about in the next several months.
1) Linked open data.
It seemed like there was a session focusing on LOD in almost every time slot. Has LOD always been such a hot topic, I wonder, or are libraries and academic info-professionals getting into it more recently, much as I have? The number of sessions devoted to LOD is a measure of interest in it, rather than of clear strategies for working with it or decisions about what to do with it (but this is a perennial challenge with LOD, or it has been so far — this presentation from Robert Sanderson from the JDH is a good run-down of some of the problems).
I’m still enthusiastic about the potential of LOD for my project, and for digital humanities projects that involve complex, heterogeneous data — but I admit that what I saw (and read via Twitter, for panels I didn’t attend) made me understand Sanderson’s lack of confidence in it as a platform. There seemed to be some instances where people wanted to be extraordinarily precise with their data, and others where automation was going to happen on such a vast scale that errors were inevitable, and it seemed there would be little effort even to try to deal with them. (I keep hoping that I thoroughly misunderstood what I heard at that particular session.)
This isn’t the time or place to fall down the LOD rabbit hole, but listening made it clear to me that I need to make my own adventures and process with semantic web stuff more transparent. There’s great potential for LOD in digital scholarship, if we can work through the associated challenges.
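In that spirit, here is a minimal sketch of what LOD actually looks like in practice. Nothing below comes from the conference sessions; the entities, the URIs, and the choice of Python’s rdflib are all my own illustrative assumptions. The core idea is that every statement is a subject–predicate–object triple, and that using shared URIs (here, schema.org terms) is what makes the data "linked" across datasets.

```python
# A minimal, invented sketch of linked open data as RDF triples,
# using Python's rdflib. All URIs and entities here are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")  # shared public vocabulary
EX = Namespace("http://example.org/")      # stand-in local namespace

g = Graph()
g.bind("schema", SCHEMA)
g.bind("ex", EX)

# Each add() asserts one (subject, predicate, object) triple.
letter = EX["letter/42"]
author = EX["person/jane-doe"]
g.add((letter, RDF.type, SCHEMA.CreativeWork))
g.add((letter, SCHEMA.author, author))
g.add((letter, SCHEMA.dateCreated, Literal("1918-05-06")))
g.add((author, SCHEMA.name, Literal("Jane Doe")))

# Turtle is the most human-readable RDF serialization.
print(g.serialize(format="turtle"))

# A small SPARQL query over the same graph: who wrote what?
q = """
SELECT ?work ?name WHERE {
    ?work schema:author ?person .
    ?person schema:name ?name .
}
"""
for row in g.query(q, initNs={"schema": SCHEMA}):
    print(row.work, "by", row.name)
```

Even a toy example like this surfaces the tension I mentioned above: asserting schema:author triples can be automated at vast scale, but deciding whether two person URIs actually refer to the same person is exactly the kind of precision work that resists automation.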
2) How people work together, and/or challenges involved in supporting people working together.
Some people work mostly independently, others work in teams; but a lot (all?) of the endeavours that info-professionals are currently involved in depend on multiple teams being able to communicate and work together effectively. I typed “teams”, but it occurs to me that in some ways that’s an oversimplification, because teamwork is very much an industry idea, and I regularly see academics (faculty and staff) bristle at the idea of being on teams.
Data is being created/curated/managed by one group of people for use by another group of people (and yes, these two groups partially overlap), but it’s not always clear how much communication happens between the two groups, how that communication fits into the workflow, or how much time people spend on trying to make the communication effective rather than potentially alienating. Inna Kouper, of Indiana University, gave a workshop on Data Curation for the CLIR cohort, and brought up a problem that I’ve heard mentioned multiple times lately: bad surveys. To wit: people send out surveys that take up other people’s time — only the questions turn out to be the wrong ones, leading to oh, frak, another survey… Making a survey is easy. Making a really good survey isn’t quite as easy.
Here’s a different example of digital scholarship-related communication, and the way in which it can be tricky: today I met with several members of McMaster’s History Department to chat informally about digital humanities — what they’re doing, what the Sherman Centre is doing, etc. At one point, someone asked what the Sherman Centre’s definition of digital scholarship is, and after thinking about it for a moment, I gave an honest answer: that we don’t have a closed definition, and that this, in fact, is far better than having a fully defined and restricted idea of it. I stand by this, because what my colleagues and I want is for our particular version of digital scholarship to emerge from what we do with other members of the McMaster community. That’s far better than either an idea of DS that simply goes out from the Sherman Centre, or one that simply comes into it from the faculty. This jointly generated idea will be based on the details of faculty members’ expertise and on Sherman Centre staff members’ expertise — or it ought to be. But for complicated reasons, working that out can feel a bit daunting — both for faculty (not the History Dept. specifically) and for Sherman Centre staff.
At CNI, this sort of situation seemed like the undertone running through many, if not all, of the sessions I attended.
3) Infrastructure, and how much people need to know about it.
In the opening plenary session (“A Conversation on the Changing Landscape of Information Systems in Higher Education”), one of the speakers referred to some of the constraints of enterprise systems, prompting me to wonder on Twitter how many academics know what enterprise systems are, and whether it matters if they don’t. One of the exchanges that followed points back to the “how do people work together” theme; the other response put a different spin on it: how much do academics need to know about the tools they use? “Tools” in this sense doesn’t just mean fancy digital apps beloved by self-identified DHers — one major set of tools is library catalogs. I would argue that at this point, library catalogs are almost ubiquitous computing: we think of them as a tool we are expert at using, and we don’t think of them as a tool that might be subject to sudden change or fragility.
In another session, debriefing attendees on the Executive Roundtable on Digital Humanities (note: CNI has just released a report on digital scholarship centres), CNI Director Clifford Lynch mentioned that there weren’t as many calls for infrastructure from humanists as one might expect. On Twitter, attendees wondered: how much do faculty need to know about infrastructure? Would such knowledge lead to a situation where everyone ended up arguing about the right sort of infrastructure, and nothing got done? One of my CLIR colleagues pointed out that many humanists are probably used to jury-rigging things rather than trying to intervene in the system, and that rings true for me. But when Lynch said a few minutes later that many faculty “didn’t know how to think through constructing a project in a particular area,” I took it as an indication that faculty need to know more about infrastructure than they generally do right now.
But what kind of knowledge about infrastructure do faculty (and staff) need? It’s too easy to jump to the conclusion that they need to be able to duplicate the knowledge of industry people, which leads to those sticky debates about whether digital humanists need to learn how to code.
I don’t have a good answer to that question tonight.
In all these discussions, infrastructure tended to mean funding, or the presence of faculty/staff/DS centres — so, also the result of funding. In hindsight, though everyone seemed to be talking about infrastructure, or about working together, few sessions talked directly about the two together, i.e. social infrastructure. The exception among the sessions I attended was Adam Hyde’s talk about Project Tahi, a workflow application similar to Kanban-style tools like Trello, but with features developed specifically for academic journal editing. I’m still thinking about an assertion that Hyde made early on, which I’ll paraphrase: collaboration isn’t just about divided labor; it’s about people working on the same thing at the same time. How often do today’s collaborations — especially collaborations in academia — achieve that?