By the end of Week 1, it seemed like I was making quick yet substantial progress. I mean, isn’t reaching the first milestone on Day 3 at least *something*?

As it turned out on Monday of Week 2: not really.

Well, I guess yes, I did make progress. That is, if you count convincing myself of the rationale for, and coming to terms with, the decision to ditch the entire platform I was planning to spend the entire summer on as “progress”; if you count discarding the code base that showed the promising result (shown in the previous post) based on the same rationale as “progress”; if you count learning how to get back up after dwelling in the disappointment of the setback for a few days as “progress.”


The matrix-free agent-based model was shown on Monday to be a no-go.


Essentially, it ran far too slowly, relative to its matrix-based counterpart, to justify the modularity advantage I had laid out in the URG proposal. It wouldn’t have surprised or bothered me much if it had run just a bit more slowly, but it ran about 70x more slowly than the other NetLogo code that relied heavily on matrices.

So the question became: what’s the point of using NetLogo if I have to use matrices anyway?

In fact, MATLAB was far better at handling matrices than NetLogo. For example, changing the size of a matrix can be done with one line of code in MATLAB, but in NetLogo it involved 1) converting the matrix into a list of lists, 2) adding/removing elements to/from the list of lists, and 3) converting the list of lists back into a matrix.
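
To give a flavor of the difference, here is a minimal MATLAB sketch; `W` is just a toy stand-in, not the actual model’s matrix:

```matlab
% Growing a matrix by one row and one column is a one-liner in MATLAB,
% since indexing past the current bounds auto-expands and zero-pads:
W = rand(4, 4);       % toy 4x4 matrix standing in for the real one
W(end+1, end+1) = 0;  % W is now 5x5, with the new row/column zeroed

% The NetLogo route (via its matrix extension) was roughly:
%   1) let rows matrix:to-row-list W
%   2) ...append/remove items in the list of lists...
%   3) set W matrix:from-row-list rows
```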


And that’s how I came to ditch NetLogo and the agent-based modeling approach for this project.


On a brighter note, coming back to my summer housing in the evening and seeing this cute little bunny on the grass was quite a joy.



My next setback came at the end of this second week, when I attempted to speed up the current MATLAB code by incorporating the CUDA-integration features provided in MATLAB’s Parallel Computing Toolbox.


I thought letting the graphics processing unit (GPU) handle all the matrix multiplications done throughout the simulation would speed things up considerably. Prof. Riecke and I were hoping for at least a 10x speedup compared to the pure CPU-based simulation.
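
The change itself is nearly a one-liner with the toolbox’s `gpuArray`. Here is a minimal sketch of the idea; `W`, `r`, and the `tanh` update are illustrative stand-ins, not the actual model:

```matlab
% Move the data to the GPU once...
W = gpuArray(rand(300));       % ~300 neurons, roughly this project's scale
r = gpuArray(rand(300, 1));

for t = 1:1000                 % toy simulation loop
    % ...and every operation on gpuArrays now runs on the device,
    % including this matrix-vector multiply.
    r = tanh(W * r);
end

r = gather(r);                 % copy the final result back to the CPU
```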

The result? Little to no speedup.

Why was that? Upon looking further into it, I found that a CPU is much faster on a per-core basis (~100x faster, according to the lecture notes from the CUDA programming class I took this past winter); a GPU wins only by keeping thousands of its slower cores busy at once, which makes the problem size of this project (a few hundred neurons) too small for the GPU to deliver such a huge speedup. Furthermore, a good chunk of the matrix operations in the MATLAB code were element-by-element, and those do so little arithmetic per memory access that they are memory-bound and gain far less from a GPU than dense matrix multiplies do.
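
A quick way to see the problem-size effect for yourself, sketched here with the toolbox’s `timeit`/`gputimeit`; the exact numbers will of course depend on the hardware:

```matlab
% Compare a CPU vs. GPU matrix multiply at roughly this project's scale
% (a few hundred neurons) and at a size where the GPU can actually shine.
for n = [300, 5000]
    A = rand(n);
    A_gpu = gpuArray(A);

    t_cpu = timeit(@() A * A);              % median CPU time
    t_gpu = gputimeit(@() A_gpu * A_gpu);   % GPU time, with synchronization

    fprintf('n = %4d: CPU %.4f s, GPU %.4f s, ratio %.1fx\n', ...
            n, t_cpu, t_gpu, t_cpu / t_gpu);
end
```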

Welp.


But then, this is what real research is supposed to be about. Nothing worth doing is easy. They say that when everything goes exactly as expected, with zero major obstacles, it’s something to be concerned about, not something to rejoice over; this is precisely the kind of situation they mean. And I’m getting a good dose of what REAL research life is about: what I’ll be experiencing in graduate school on a daily basis.


What else can I ask for?