Taking the TOEFL test

I recently took the TOEFL test and scored 113/120. Here are some notes for those who have to do the test as well.

My level of English is fairly good. I read mostly in English (novels, news, …) and also do quite a lot of writing. Although I speak English often, most of the time this is in the company of people like me, for whom English is a foreign language, so it’s definitely not perfect.

A first tip: no matter how good your English is, you’ll need to prepare for this test. You need to be familiar with the structure of the test and the question formats, or else it’ll be very hard to complete the sections in time.

I didn’t spend any time reviewing grammar, spelling, pronunciation, etc., simply because I didn’t have the time. That’s okay – the TOEFL tests whether you can function within an academic environment, and small mistakes are not considered problematic. I did want to improve my listening skills, so I tried watching DVDs without subtitles.

You’ll need a guide to prepare for the test. I used the Official Guide to the TOEFL Test. (The 4th edition is currently the latest, but I used the 3rd edition without problems.) I skimmed through it and did some of the exercises. They are very similar to the ones on the actual test. I paid special attention to the speaking exercises. It’s important to practice those with a timer. For example: you’ll have 20 seconds to prepare an answer, and 40 seconds to speak. This is very short and I had to get used to the format. The first few times I found it very hard, but things improve after a few tries.

I took the test in Barcelona, but I think the experience will be similar everywhere. There were about 8 people taking the test when I did it. Prepare for a very strict procedure (metal detector scan, you can’t bring your own paper, …).

The test has 4 parts:

  1. Reading: keep an eye on the clock here. I had 90 minutes for 4 texts, and I had to hurry at the end to complete everything. There are some tricky questions, so make sure you understand each question. Sometimes an answer seems obvious but turns out to be wrong. To my surprise I didn’t make any mistakes here.
  2. Listening: this was pretty easy. Do a couple of exercises from the book and you should be fine. The actors in the conversations all speak very clearly, so you shouldn’t have a problem understanding them. I took extensive notes while listening. I rarely had to use them when answering the questions, but I’m sure taking them helped me understand the structure of the talks and dialogues. I had everything correct here.
  3. Speaking: as I mentioned before, you really need to train for this. The questions are similar to the ones in the book. Use a clock when practicing, because 40 or 60 seconds is very little time to reply. Here I scored 26/30.
  4. Writing: my result was 27/30. When practicing, keep an eye on your word count. For example, you may be asked to write a 300-word essay, so it’s important to get a feeling for what that actually looks like. I had a little time left in this section, so you shouldn’t need to hurry here.

The test takes about 4 hours to complete, with a break included. I received my score a week later.

Using OpenNI 2.2 Beta on OS X

Here’s how I got the samples working. First install OpenNI itself:

  1. Download OpenNI 2.2 Beta for OS X.
  2. Run sudo ./install.sh. You will not see any output. This is normal.

Next you’ll need a driver for the Kinect:

  1. Clone the OpenNI2-FreenectDriver repo.
  2. Run ./waf configure build. You should now have a build directory with libFreenectDriver.dylib in it.
  3. Copy that file into Redist/OpenNI2/Drivers/. If you want to try the samples, then also copy the file into Samples/Bin/OpenNI2/Drivers. (The full sequence of commands is sketched below.)
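
For reference, here’s the whole driver setup condensed into shell commands. This is a sketch based on my setup: the clone URL is an assumption (check the project page for the current location), and the install path is simply where I unpacked OpenNI.

git clone https://github.com/piedar/OpenNI2-FreenectDriver.git   # URL assumed
cd OpenNI2-FreenectDriver
./waf configure build

# copy the driver next to the OpenNI redistributables and samples
cp build/libFreenectDriver.dylib /Applications/OpenNI-MacOSX-x64-2.2/Redist/OpenNI2/Drivers/
cp build/libFreenectDriver.dylib /Applications/OpenNI-MacOSX-x64-2.2/Samples/Bin/OpenNI2/Drivers/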

To run an example:

  1. cd Samples/Bin
  2. ./ClosestPointViewer

To use OpenNI in an Xcode project, for example in combination with openFrameworks:

  1. Create a new project. (Use the project generator if you are working with oF.)
  2. Click the name of the project in the left sidebar and make sure ‘Project’ is selected in the main panel.
  3. Look for the setting ‘Header Search Paths’ and click to edit. Click the plus sign and add the path to the ‘Include’ directory from OpenNI. For example, I installed OpenNI in /Applications/OpenNI-MacOSX-x64-2.2, so my path is /Applications/OpenNI-MacOSX-x64-2.2/Include.
  4. Next, copy libOpenNI2.dylib from the OpenNI Redist folder to the folder that will contain your binary (the compiled version of your project). If you are working with oF then this is the bin folder inside your project.
  5. Finally, add #include "OpenNI.h" at the start of your code. (In oF, this would be in testApp.h.) See the minimal sketch below to verify the setup.
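
To check that everything is wired up, something like the following minimal sketch (plain C++, outside of oF, using the standard OpenNI 2 API) should compile and open the Kinect. Note that you’ll probably also need to add libOpenNI2.dylib to the project itself so the linker can find it – the copy in step 4 only covers the runtime lookup.

#include <iostream>
#include "OpenNI.h"

int main() {
    // initialize the OpenNI library
    if (openni::OpenNI::initialize() != openni::STATUS_OK) {
        std::cout << "Initialization failed: "
                  << openni::OpenNI::getExtendedError() << std::endl;
        return 1;
    }

    // open the first available device (the Kinect, via the Freenect driver)
    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) {
        std::cout << "Could not open device: "
                  << openni::OpenNI::getExtendedError() << std::endl;
        return 1;
    }

    std::cout << "Device opened: " << device.getDeviceInfo().getName() << std::endl;

    device.close();
    openni::OpenNI::shutdown();
    return 0;
}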

Using OpenCV 2 with OS X – Hello World

Here’s how to install OpenCV on OS X. I’m using:

  • OS X 10.8.4
  • Xcode 4.6.3
  • OpenCV 2.4.5

I installed OpenCV using the Homebrew package manager, so install Homebrew first if you don’t have it yet.


brew tap homebrew/science
brew install opencv

If this doesn’t work, do a brew doctor and fix all problems it reports.
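
If the install succeeded, the OpenCV dylibs should be in /usr/local/lib (assuming the default Homebrew prefix). A quick way to check:

ls /usr/local/lib | grep opencv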

Open Xcode, and go to File > New > Project. Select Application under OS X and choose Command Line Tool.

[Screenshot: creating a new Command Line Tool project]

Give the project a name and make sure Type is set to C++.

[Screenshot: naming the project and setting Type to C++]

Right-click in the Project Navigator and select New Group. Name the group OpenCV.

Right-click on the OpenCV group we just created and select Add Files to … In the file dialog, hit / (forward slash), so you can enter the path we need: /usr/local/lib. Select the following libraries:

  • libopencv_core.dylib
  • libopencv_highgui.dylib

[Screenshot: adding the OpenCV libraries]

This is the bare minimum. You might need other files depending on what you’re doing with OpenCV. For example, if you’re doing feature detection, you’re going to need libopencv_features2d.dylib.

Before clicking Add, make sure that Copy items is off, and Add to targets is checked.

Now we need to tell Xcode where it can find the OpenCV headers. Click the name of the project in the Project Navigator, then click the project name in the main panel and make sure the Build Settings tab is selected. Look for the Header Search Paths setting, and click the blank cell next to it to edit it. Click the plus sign and add the path /usr/local/include.

[Screenshot: setting Header Search Paths]

The last step is to change the standard C++ library used by the compiler. When I used the Xcode default (libc++), I got Undefined symbols for architecture x86_64 errors, presumably because the Homebrew build of OpenCV was compiled against libstdc++ and the two standard libraries can’t be mixed. This can also be changed in the Build Settings tab, under the setting C++ Standard Library. Set it to libstdc++.

[Screenshot: setting the C++ Standard Library]

Let’s see if it works. Go to the Project Navigator and click main.cpp to edit it. Replace the contents with:


#include <iostream>
#include <opencv2/opencv.hpp>

int main() {

    // load an image
    // make sure you change the path!
    cv::Mat img = cv::imread("/Users/bert/Desktop/test.jpg");

    // check if the image was loaded
    if (img.data) {
        std::cout << "Image loaded" << std::endl;
    } else {
        std::cout << "Image not loaded" << std::endl;
        return -1; // imshow would fail on an empty image
    }

    // create a window and show the image
    cv::imshow("our image", img);

    // wait for a key press
    cv::waitKey();
}

When you click Run, this should display the image in a window. Hit any key to end the program.

ITP Camp Fellow

Quick update: I’m alive and well! I was given the opportunity to spend June at NYU’s ITP as a Camp Fellow.

It’s great to be back in this place. I’m working on a tool for creating subjective maps, using openFrameworks. Here’s an early screenshot:

[Screenshot: slitmap alpha]

Apart from this, I’m also teaching a series of openFrameworks workshops.

Did we ever look up?

We never look up is a photo blog. The photographer is a “mobile researcher from Finland” – that’s all we know about him or her. The black-and-white pictures show people interacting with the screens of their phones while in public space. Some of them are sitting or standing; others are walking. Locations include sidewalks, squares, bars, shopping malls, public transport, bus stops, etc. The people in the pictures seem to be glued to their screens. They are physically in public space, but it’s more accurate to say they are inside their own private world.

The author stated in an interview that he or she doesn’t want to criticize – the purpose is to document. The photo blog drew quite a lot of reactions. Many people see the smartphone trend as negative:

  • We don’t look at our surroundings anymore.
  • We don’t talk to each other anymore.
  • We seem to be addicted: we can’t stop.

Why do we have moral issues with these new technologies? Genevieve Bell is an anthropologist and the director of Intel Corporation’s Interaction and Experience Research group. According to her, new technology is seen as negative if it simultaneously changes our relation with space, time and other people:

1. Space. This is the case with smartphones: we are in our private bubble when using our phones in public.

I wrote before about how GPS changes the way we navigate the city: we follow the fastest route calculated by the software. Moving from A to B becomes very instrumental: we move our bodies to the destination. There is less chance for surprises and actual discoveries. We have less opportunity to build up a mental map of a city.

2. Time. The internet connection in our smartphone means we’re always connected. The boundary between work and leisure becomes blurred. We can read email anytime now, even when we’re away from our computers. There’s a growing expectation that email will be replied to quickly – not within days but within hours.

Smartphones seem to accelerate our lives. Think about taking pictures, for example. In the era of the analog camera, it took days before pictures could be shared with friends. Now it’s an instant process. (See also Douglas Rushkoff’s Program or Be Programmed, especially chapter 1 on time.)

3. Other people. The pictures on the We never look up blog seem to suggest that we’ve lost something here. Something has changed in our social relationships, but I’m not so sure it’s as dramatic as some people suggest.

Let’s think about affordances: a smartphone is designed for solo experiences (small screen size, comes with earbuds, …). But so are books, newspapers and magazines. A friend who made a trip around the world told me he noticed the same unwritten social law in hostels everywhere: reading a book means “leave me alone”, while a closed book in front of you on the table signals that you’re available for a chat.

Perhaps we’re nostalgic for an era that never existed? Hasn’t disconnecting from the world always been a part of life in the city? A necessity to make life amongst the crowds possible?

One important difference between smartphones and printed media: it’s almost impossible to guess what a smartphone user is engaged in – it might be an email, a game, a book, … The design of printed matter betrays some of its content, for example the cover of a book or the newspaper format. For my globetrotting friend, the title of a book might be a good conversation starter. Would we be less annoyed by the people around us who are glued to their smartphones if we had some indication of what they were doing? What if the back or side of our phone could, for example, show the name of the article we’re reading?

Smart Citizens in the Data Metropolis

What form will the new, hyperconnected flaneur take, now that our right to lose ourselves in the city or discover unexpected spots while looking for a late-night pharmacy is no longer taken for granted? Perhaps one possible role of cultural institutions will be to imagine new urban experiences that enrich physical space with a certain poetry, to return some serendipity to the street experience, or to help us resignify data or reencounter furtive space.

Quoted from Smart Citizens in the Data Metropolis by Mara Balestrini, on the CCCB LAB blog.

The effects of GPS and smart phones on our experience of the city

I’m thinking a lot about location, presence, distance, space, maps and related concepts lately.

Since I’m living in a city that is still relatively new to me, I’ve come to rely on the GPS map functionality of my smartphone. It’s amazing how quickly we get used to this and how it affects our experience of the city. Navigating the city like this turns it into generic space, each location interchangeable with the other. (In Barcelona, this is amplified by the regularity of the grid pattern of the Eixample.)

The process of getting familiar with a place becomes unnecessary. We are strangers and have just arrived here, yet we already know our way.

The absence of GPS not only forces us to get to know a place, but also its people, when we ask for directions.

Unique features of places fade into the background. For example: we pay less attention to street names. They become less important for identifying a location when we have a GPS and compass in our phones.

GPS also seems to affect the space between our current location and the destination. An algorithm sends us along the most efficient route. A conventional map still required us to understand what was between here and there. Routing applications have gotten so good that we don’t need to care anymore.