Blog

The Abralin lectures

~~Love~~ Linguistics in the time of ~~cholera~~ coronavirus. At a time when most institutions in most countries (yay Taiwan!) have closed for the Spring semester, the Brazilian Association of Linguistics, a.k.a. ABRALIN (Associação Brasileira de Linguística), has invited linguists from across the world to give online lectures that are freely available. I thought it would be a good idea to document here the ones I have seen and liked enough to share.

Data packages for current and future me

tl;dr: I show why it is worthwhile to put my Chinese-related datasets in packages, and how I went about it. Introduction I don’t know if I’m very late to the party, but in this final sprint towards a finished dissertation, I keep finding myself juggling multiple datasets when using them in R. This usually means a call to readr::read_csv() or a related function, but the drawback is that I have
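(The excerpt cuts off here. As a taste of the problem the post tackles, here is a minimal sketch of the read-and-clean boilerplate it alludes to; the file paths, column names, and the package name `zhdata` are all hypothetical, not from the actual post.)

```r
# Hypothetical sketch: every analysis script repeats the same
# read-and-clean steps on the same files.
library(readr)
library(dplyr)

chars <- read_csv("data/characters.csv") %>%   # hypothetical path
  mutate(pinyin = tolower(pinyin))             # hypothetical column

# With a data package, the cleaning lives in one place, and every
# script can simply do:
# library(zhdata)        # hypothetical package name
# zhdata::characters     # already cleaned, documented, versioned
```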

Rbootcamp 2019

tl;dr: Below you find what we did during the Rbootcamp for Lexical Semanticists. Between this paragraph and the contents, there is a bit of my own #Rstory. Warning: many R puns ahead. Intro Somewhere at the beginning of this semester, I got the request to teach the other people in my lab “the basics of R”, so that they could see what kind of benefits

Tidy collostructions

tl;dr: In this post I look at the family of collexeme analysis methods originated by Gries and Stefanowitsch. Since these methods use a lot of Base R and love using vectors, there is a hurdle to clear if you are used to the rectangles of tidy data. I first give an overview of what the method tries to do, and then at the end show the
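(The excerpt cuts off here. Purely as an illustration of what the method tries to do, here is a minimal sketch of the 2×2 test at the core of simple collexeme analysis, as in Stefanowitsch & Gries (2003); the frequencies are made up, and this is not the post’s actual code.)

```r
# One 2x2 table per lemma, scored with a Fisher exact test.
lemma_in_cx  <- 120     # lemma inside the construction
lemma_out_cx <- 880     # lemma elsewhere in the corpus
other_in_cx  <- 2400    # other lemmas inside the construction
other_out_cx <- 996600  # everything else

tab <- matrix(c(lemma_in_cx,  other_in_cx,
                lemma_out_cx, other_out_cx),
              nrow = 2, byrow = TRUE)
fisher.test(tab)$p.value
# Collostruction strength is often reported as -log10 of this p-value:
# the lower the p, the stronger the attraction (or repulsion).
```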

Guanguan goes the Chinese Word Segmentation (II)

tl;dr: This double blog post is first about the opening line of the Book of Odes, and then about how to deal with Chinese word segmentation and my current implementation of it. So if you’re only interested in the computational part, look at the next one. If, on the other hand, you want to know more about my views on the translation of guān guā
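(The excerpt cuts off here. As a minimal sketch of Chinese word segmentation in R, and not necessarily the implementation the post describes, one common route is the jiebaR package:)

```r
# Segmenting the opening line of the Book of Odes with jiebaR.
library(jiebaR)

seg <- worker()  # default engine, modern-Chinese dictionary
segment("關關雎鳩，在河之洲", seg)
# Classical Chinese like this will segment only roughly, since jieba's
# default dictionary targets modern Mandarin; a custom user dictionary
# (the `user` argument of worker()) can improve the results.
```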

Guanguan goes the Chinese Word Segmentation (I)

tl;dr: This double blog post is first about the opening line of the Book of Odes, and then about how to deal with Chinese word segmentation and my current implementation of it. So if you’re only interested in the computational part, look at the next one. If, on the other hand, you want to know more about my views on the translation of guān guā

Memorable moments of 2018

Hi, reader. Since I am doing research most of the time, and somewhat struggling with the question of what I should let out of the bag on this blog, my findings have so far only made their way to research conferences, some articles, and my notes, instead of also ending up on this platform. So, as it has turned out to be difficult to keep this blog updated, a

The Rite of Spring

Aaaah, can you feel it, radiating from this picture? Spring was in the air. And also in the title of this update. In this update I’ll write about two main events in my Taiwan life: finding out that I passed the intermediate Isbukun Bunun exam, and the visit of my lovely classmates from Sinology, back in the day. Have fun reading! Passing the Bunun exam Those among you who

Big Hero City

In an earlier post, I asked you guys why you are visiting this blog. If you feel like contributing to my informal study, please do so now by filling out this questionnaire. If you just want to read and look at pictures, which is what most of the answers indicated, just keep scrolling down! Because this time we’ve got Kaohsiung in store! Kaohsiung Yes, we are getting old cows out of the

Handling research (Academic workflow 2)

Last time we did this, I showed you what my PLN looked like about a year ago. I advise you to read that post first before continuing below. In today’s update I will lay out how I generally go about processing research. It’s a kind of habit I’ve been cultivating for a few years now, and it generally pays off, but it did not come out of nowhere, and I may have forgotten some of the sources that influenced me.