CartoDB – New York City Community Gardens

Interactive Maps

New York City Community Gardens by County
New York City Community Gardens and Subways

I created two interactive maps showing the locations of New York City community gardens and subways. The first map shows the gardens within the borough polygons and the second shows gardens and subway stations. When the user clicks on a feature, information about the garden appears in an infobox, including the garden name, address, borough, and the community board and council districts in which it is located. The subway infobox contains the subway line, the station name and entrance location, and the MTA website URL for the selected station.

About CartoDB

CartoDB is an open-source cloud-based mapping application that allows users to analyze and create visualizations of uploaded datasets. According to the CartoDB website, the tool allows you to:

  • Upload, visualize, and manage your data using the CartoDB dashboard.
  • Quickly create and customize maps that you can embed or share via public URL using the map embedding tool.
  • Analyze and integrate data you store on CartoDB into your applications using the SQL API.
  • For more advanced integrations of CartoDB maps on your website or application, use CartoDB.js.

CartoDB uses the CartoCSS styling language and SQL queries via PostGIS to create customizable visualizations. Completed visualizations can be exported, downloaded or embedded on a web page.

Materials

I started with a list of NYC Greenthumb Community Gardens that I obtained from the NYC Open Data portal and an NYC Subways table provided by CartoDB as sample data. I loaded these two data tables into CartoDB to process them and create the visualizations.

CartoDB Website:

http://www.cartodb.com

Data:

NYC Greenthumb Community Gardens:

https://data.cityofnewyork.us/Environment/NYC-Greenthumb-Community-Gardens/ajxm-kzmj

NYC Subways:

http://cartodb.s3.amazonaws.com/static/nyc_subway_entrance.zip

Method

To create my visualizations in CartoDB, I began by visiting cartodb.com and creating an account. The site where your data appears will be located at http://username.cartodb.com.

Next, click “Create your first table” or drag a shapefile to the file drop window. You can import files from your local drive, Google Drive, Dropbox or via URL. CartoDB also has a number of sample datasets that you can start with.

Once the table is imported, every row is given an ID number.

Click georeference under the column you wish to georeference. This column must contain a full address, or CartoDB will not be able to compute a geocoded location. A helpful Excel formula concatenated my Address (column H), City (column I) and State (column J) columns into a single full address that can be georeferenced:

=CONCATENATE(H2, ", ", I2, ", ", J2)

Once you import your dataset, CartoDB creates a table. Give your table a simple name; you will use it a lot in SQL queries. You can import additional tables to compare data from one to the other, or boundary files that draw shapes, such as borough boundaries and Community Districts.
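
For example, once a borough boundary file is loaded, CartoDB’s SQL tab can combine it with the gardens table. The query below is only a rough sketch, not the query behind my maps: the boundary table name nyc_borough_boundaries and its name column are placeholders, while the gardens table name is the one that appears in the queries later in this report and cartodb_id is the ID column CartoDB adds to every row.

SELECT bb.name AS borough, COUNT(g.cartodb_id) AS garden_count
FROM nyc_borough_boundaries AS bb
JOIN nyc_greenthumb_community_gardens_fusion AS g ON ST_Intersects(bb.the_geom, g.the_geom)
GROUP BY bb.name
ORDER BY garden_count DESC;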

Results/Discussion

My completed CartoDB visualization is located at:

http://cdb.io/17lJo7l

Map of NYC Greenthumb Community Gardens (green) and NYC Subway Entrances (orange):

greenthumb-garden-maps

Close-up view showing layers. It is possible to toggle data layers on and off using the visible layers feature:

greenthumb-gardens-and-subways

There is definitely a learning curve to CartoDB for someone who is not well versed in developing SQL queries in PostGIS. Understanding how to format your data properly before importing the file to CartoDB saves some headaches. I learned this when my attempt to georeference my address field ended up plotting NYC gardens in several countries throughout the world. Clearly something wasn’t reading correctly. I realized I needed a single column with the full address containing street address, city and state. The Excel concatenation formula described above fixed this problem.

There were a number of these errors that I needed to correct manually, mostly addresses that were intersections rather than postal addresses. For example, one data point that was supposed to be in The Bronx was plotted in Michigan.

I found the correct address at the NY Restoration Project website and corrected the error by right-clicking the misplaced data point and selecting Edit Data, indicated by the icon with three horizontal lines (see next image). This allows you to change the data in each cell. I entered the correct address into the form and clicked the button marked “Save and Close.”

Even though I edited the cell to the correct address in The Bronx, the data point was still located in Michigan. I clicked “Edit Geometry,” indicated by the icon of a point surrounded by four arrows. This activated the point so I could drag it to the correct place on the map. When I was satisfied that the point was in the correct location, I clicked “Done.”

greenthumb-map-selection-details

One of the fields in the Community Garden file included the community board district where each garden is located. I attempted to add another table with demographic data for community board districts, but had a lot of trouble. It would import the file headers but not the data.

I also attempted to import county boundaries using a file provided by CartoDB, but the shapes were inadequately drawn at the scale I needed. The boundaries did not fit the actual counties, and some gardens were located outside their respective county polygons.

Next, I imported the NYC Subways file, also provided by CartoDB, and this worked. I decided to see if I could map which gardens were located within a minimum distance of a subway. I found the following sample query in a PostGIS manual (PostGIS in Action) at http://gisciencegroup.ucalgary.ca/engo500/texts/PostGISinAction.pdf:

SELECT c.city, b.bridge_nam, ST_Distance(c.the_geom, b.the_geom) As dist_ft
FROM sf.cities AS c CROSS JOIN sf.bridges As b;

I edited the query as follows to see if it would return the distance to the nearest subway station:

SELECT a.name, b.garden_name, ST_Distance(a.the_geom, b.the_geom) As dist_ft
FROM nyc_subway_entrance_export AS a CROSS JOIN nyc_greenthumb_community_gardens_fusion As b;

This returned some unexpected results. Instead of the nearest subway for each garden, the query appeared to return the distance from the subway entrance at Central Park West & 96th St At Sw Corner. In hindsight, the CROSS JOIN pairs every garden with every entrance, so the rows I saw all referenced that one station rather than each garden’s nearest one. I would need more training in SQL and PostGIS to figure out how to edit the query to select only the closest station.
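
One way to narrow the result to the closest station for each garden is PostgreSQL’s DISTINCT ON, ordered by distance. This is only a sketch based on the table and column names above; I have not run it against the live tables. Casting the_geom to geography makes ST_Distance return meters, whereas on the plain latitude/longitude geometry it returns degrees, so the dist_ft alias above is misleading in any case.

SELECT DISTINCT ON (b.garden_name) b.garden_name, a.name AS nearest_station,
ST_Distance(a.the_geom::geography, b.the_geom::geography) AS dist_m
FROM nyc_subway_entrance_export AS a CROSS JOIN nyc_greenthumb_community_gardens_fusion AS b
ORDER BY b.garden_name, ST_Distance(a.the_geom::geography, b.the_geom::geography);

DISTINCT ON keeps only the first row for each garden_name, and the ORDER BY ensures that first row is the pair with the smallest distance.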

Future Directions

I would like to keep working on finding the distance from Greenthumb gardens to their nearest subway stations. It would also be interesting to map the gardens within their respective community board districts. Food justice is a big issue in New York City, which I have not addressed in this report. It would be interesting to find out how many gardens are located within a quarter mile of NYCHA housing, or to map the gardens relative to supermarkets.
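
As a starting point for the quarter-mile question, and only as an untested sketch using the two tables already loaded, PostGIS’s ST_DWithin could flag gardens with at least one subway entrance within about 400 meters (roughly a quarter mile); the same pattern would apply to a NYCHA or supermarket table once one is imported.

SELECT g.garden_name
FROM nyc_greenthumb_community_gardens_fusion AS g
WHERE EXISTS (SELECT 1 FROM nyc_subway_entrance_export AS s
WHERE ST_DWithin(g.the_geom::geography, s.the_geom::geography, 400));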

I did a GIS study of the area near the Navy Yard in Brooklyn for a real estate appraiser a few years ago, and the lack of nearby supermarkets was quite apparent.

navy-yard-map

This visualization shows that the closest community garden to the Farragut Houses, adjacent to the Navy Yard, is the Bridge Plaza Court garden, which is inconveniently located across the interchange for the Brooklyn-Queens Expressway and the Manhattan Bridge off-ramp. The next closest is at PS 67 in the Walt Whitman Houses, across the expressway from the Navy Yard. Most grocery options in this area were bodegas, small sandwich shops and delis; the next closest markets that do not require crossing an expressway are higher-end specialty markets in the more expensive neighborhood of Vinegar Hill. Mapping the distances to community gardens, greenmarkets and supermarkets would give an interesting picture of food justice in the area.

Tableau Software – Joseph Stiglitz’s Economic Research

Tableau Software, developed at Stanford University, allows users to import data and create a number of visualizations by dragging and dropping components onto a panel. Users can create any number of visualizations, develop interactive dashboards and publish them onto Tableau Public, a platform that allows you to share your visualizations or embed them into your own website and presentations. We tested Tableau Software in LIS 658 Information Visualization at Pratt Institute. Below is a report of my findings.

Materials

The materials required for this lab included a computer, Tableau desktop software, and a dataset to work with. Tableau Software was already available on the desktop computer of the Pratt Institute classroom where we performed the lab. Since the Nobel Prizes were going to be announced that week, I was interested in submissions from Nobel economics laureate Joseph Stiglitz in Columbia University’s Academic Commons repository.

Dataset: https://whysel.com/wp-content/uploads/2014/04/Stiglitz-AC-Downloads.xlsx

Tableau Software: http://www.tableausoftware.com/

Data Wrangler: http://vis.stanford.edu/wrangler/


Deliverables

Downloads by Content Type
Total Downloads by Item
Item Change in Position Over Time
Most Popular Item by Month

Method

  1. Select the Data.
    The first step is to find a dataset to work with. This can be something you have on your computer or something that you find on the Internet. Make sure that it is in a format that Tableau will accept. The following formats are accepted:

    • Tableau Data Extract
    • Microsoft Access
    • Microsoft Excel
    • Text File
    • Import from Workbook

Note that Tableau requires that the data be formatted in a normalized manner, so that each row contains a single record. I used a program called Data Wrangler to edit the raw data into a format that is readable by Tableau.

  2. Connect to Data.
    To import data into Tableau, click “Connect to Data.” Select the data format (I chose Excel). Then, select the workbook that you would like to import.
  3. Create a View.
    In the left column, you will see all of your data, identified by Tableau as Dimensions and Measures. Dimensions identified as text have an “abc” icon, dates have a calendar icon, and geographic locations have a globe icon. Dimensions can also be discrete numerical fields, like order numbers or zip codes. Measures are continuous numerical fields and are identified with the # symbol, or as geographic data (latitude/longitude) with the globe icon. Make sure that each data format is identified properly. If not (for example, if a date is identified as a label rather than a date field), you may need to return to the dataset to edit the data. If you need to convert a discrete field to a continuous field, right-click it in the Dimensions pane and select Number under Data Type, then right-click again and select “Convert to Continuous.”
  4. Select fields to analyze.
    Drag a Dimension or Measure to the Columns “shelf” at the top right of the screen, and another Dimension or Measure to the Rows shelf. Notice that when you drag a discrete field to a view, it adds category headings, while a continuous field adds a scale.
  5. Choose a Mark Type.
    The Marks palette allows you to set the type of graph display. On Automatic, Tableau will select the graph type that best fits the data you selected. You can also use the Show Me button, which opens a palette and allows you to select a Mark type for your visualization. Types of displays include line and bar charts, tables, maps, pie charts, circle or shape graphs, Gantt charts and other displays. This palette shows hints for the number of dimensions and measures to use for each kind of Mark type.
  6. Add Mark Properties to the data.
    The Marks palette also contains a set of functions that allow you to change properties of the display, such as the Color or Size of its components, or to add Text or additional Detail. In the same palette, Tooltip allows you to filter or remove a selection, or to view underlying data. Tooltip also allows you to control what is displayed when a user rolls their cursor over a data point. You create these features by dragging a dimension onto the Marks palette.
  7. Create a Dashboard.
    A Dashboard is a collection of views displayed on one sheet. To create a dashboard, select Dashboard | New Dashboard. From the left column, select a view to add to the new dashboard and drag it into place in the right pane. The layout is customizable, depending on where on the pane you drag each view. A gray area will indicate where you can drag a view. You can also select from the menu the items you would like to show on the dashboard, such as titles, legends and captions. Additional options in the left column allow you to change the dashboard size, add text, images or web pages, and so on. Each time you add a new dashboard, it creates a new tab at the bottom of the screen. Dashboards and Views are noted by either a table icon (for Dashboards) or a spreadsheet icon (for Views).
  8. Save Your Visualizations.
    To save your workbook, click File | Save from the menu. You can also publish your workbook to Tableau Public by clicking Server | Publish Workbook. This will prompt you to log in to the website. If you do not already have an account, you can create one. Type a name for the workbook and click Save.
  9. Share Your Visualizations.
    Tableau offers a number of ways to share your visualizations. After saving a visualization, the preview panel offers a link to the view in Tableau Public or code to embed the view in your web page. When you embed the view, you can also find share features at the lower left of the page by clicking on the icons for Facebook, Twitter, Email or Links.

Results

I created several views of Joseph Stiglitz’s work in Academic Commons. These views included visualizations of his work by content type, number of completed downloads and downloads per month over a twelve-month period:

  • Circle Graph of the Content Type showing the number of downloads for each Content Type (stiglitz-circle-graph)
  • Graph of individual Titles, showing the number of downloads, colored by Content Type (stiglitz-content-type)
  • Bar chart of downloads by Content Type for each month (stiglitz-bar-chart)
  • Circle Chart showing each item and its position for each month, showing the data for an individual item (stiglitz-article-circle-chart)
  • Line Chart indicating the change in position of Stiglitz’s most popular item over time, with the most popular article selected to highlight spikes in downloads (stiglitz-article-line-chart)

Discussion

It took some trial and error to learn how the various features of Tableau worked. I initially had some trouble importing the dates properly. I had a field that included the month and year, containing data for a full year from October 2012 to September 2013. Tableau read this data as a month and day rather than a month and year. For example, “December 2012” was formatted as “Dec-12,” which Tableau read as “December 12, 2013.” Once I corrected the data and re-imported the spreadsheet, it displayed properly in Tableau.

It was helpful to learn the difference between a data dimension and an attribute. When I initially dragged the title and date to the Rows shelf, I ended up with a lot of small multiples. When I changed the Title to an attribute of the number of downloads, I was able to show the number of downloads for each title on a single chart. I had to play around with the attributes and filters to remove null values and get the data to display on a single chart instead of small multiples. I also had to edit the colors of the Content Types, as they were not displaying uniformly from one Workbook to the next. I had earlier changed the order of the Content Types displayed in the legend, which may have caused the problem.

One of the results of my analyses indicated that some of the articles in Joseph Stiglitz’s repository were duplicate entries. I noticed that the URL field in one particularly popular article had an asterisk instead of a URL, and on examining the data I discovered that there were indeed two copies of the same record, deposited at different times. I thought it was interesting that the software read these records as a single title and aggregated the downloads for each month into a single data point, because it meant that the visualization was providing a more accurate view of the data than the raw spreadsheet file.

Limitations of Tableau Software include the fact that it only works with the Windows operating system. I have a Mac system at home, so I was not able to download it and work further with my visualization. Also, the cost is somewhat prohibitive at $999 per user for the personal edition and $1,999 for the professional version. Tableau offers a hosted version, Tableau Online, by subscription at $500 per user per year. Aside from these limitations, Tableau Software seems to be virtually limitless in the ways one can visualize data.

Future Directions

Tableau includes many features and functions that I did not use or did not have time to explore in depth. For example, you can add a number of interactive features to dashboards, such as Actions and Filters, to allow users to change the way Views are displayed. It would be nice to be able to click on a Content Type and have it explode into the individual documents, or to click on a bar showing downloads in January and have a separate column chart break that month out by Content Type. While exploring the Help documents from home, I also discovered the Pages feature, which would be another useful way to present information in a live visualization. There are many more features worth exploring.

Going back to my original data source, I think that Tableau visualizations would provide valuable insights to the Academic Commons team at Columbia University Libraries. We are always looking for ways to drive traffic to the repository. I would like to learn how to make the URL clickable, so that when a user clicks on an item and sees the URL in the attribute box, they can follow it to view the item in Academic Commons. I tried to do this with the Tooltip feature by entering the HTML code for a link, but it displayed the code in the popup box and did not link to the repository. This could be a good way to highlight interesting features, explore document usage within a collection of materials by subject, author or department, and drive traffic to the site.

There were definite spikes in usage of the article “Equilibrium in Competitive Insurance Markets” in November 2012 and April 2013. It would be interesting to study what events triggered these spikes. An analysis of all 11,000 items by author or department would also be interesting, to see which areas could use more outreach by the data collection team.

The Academic Commons repository manager was particularly interested in the results of my analysis of duplicate records. Originally she was planning to simply delete one and add the Views and Download totals to the remaining record. When I showed her the analysis of downloads over a twelve-month period, it became clear that her plan would disrupt the data on the historical download rate for that article over time. This helped her to reconsider her plan for dealing with duplicate entries and she is now going back to the developer to work on a plan that won’t alter historical data. A similar analysis on other frequent depositors to look for duplicates would be a good idea.

Google Refine: NYC Public Bathroom Map

Many data analytics projects start with a big mess, including nonstandard inputs, miscalculations, typos and other human errors. In addition, there are times when the way data is collected is not ideal for the way the analyst wishes to interpret it. An example might be dollar amounts entered in thousands when billions would be more appropriate figures to work with. Google Refine offers a relatively simple way to clean up data and perform transformations on thousands of records at once. We reviewed Google Refine using NYC Open Data to see this in practice.

Materials

The materials required for this lab included a computer, Google Refine desktop software, and a dataset to work with. Google Refine was already available on the desktop computer of the Pratt Institute classroom where we performed the lab. Originally, we were going to use a library file from iTunes, but we were unable to load the file into Google Refine, so we copied data from the NYC Open Data portal instead.

Dataset:

Map of Bathrooms
https://data.cityofnewyork.us/Housing-Development/Map-of-Bathrooms/swqh-s9ee

New York City Open Data Portal
https://nycopendata.socrata.com/

It was also very helpful to have a copy of the Google Refine documentation:
https://github.com/OpenRefine/OpenRefine/wiki/Documentation-For-Users

Method

  1. Get the Data. The first step is to find a dataset to work with. This can be something you have on your computer or something that you find on the Internet. Make sure that it is in a format that Google Refine will accept. These formats currently include the following for Google Refine, version 2.0:
    • TSV, CSV, or values separated by a custom separator you specify
    • Excel (.xls, .xlsx)
    • XML, RDF as XML
    • JSON
    • Google Spreadsheets
    • RDF N3 triples
  2. Import the Data. Google Refine offers a few ways to import data into the system, including uploading a file or copying it into a text field. The file upload function accepts a number of formats including Excel, CSV, RDF, XML, and Google Spreadsheets, among others. You can reference a dataset via its URL, copy and paste the dataset from the Clipboard or select a dataset from Google Data. When you have selected the file or copied the data, click Next. (refine-import)
  3. Review the Data. Did it import correctly? Did the data separate into columns as it should? Are the column headings showing up as headings or as a record row? If you use a spreadsheet, a delimited file like CSV or a structured file such as RDF or XML, it should have imported correctly. If not, Google Refine offers a set of tools that can help you define separators, such as commas, tabs or a custom field, or combine columns if they were separated incorrectly. It also allows you to set the rows that you want to use as the column headings, if these are not defined in the original file. (refine-review)
  4. Create Project. When you are satisfied that the data imported properly, click on the “Create Project” button at the top right of the screen. (refine-save-project)
  5. Clean the Data. When the data is separated into columns, look for additional problems. Do there appear to be misspellings or unconventional entries? Are there numerical figures that appear to be in a nonstandard format? Make a note of these problems so you can address each of them in turn. Google Refine offers a number of tools for editing and transforming data. You can edit cells individually within the spreadsheet. To edit a single cell, hover your cursor over the cell you would like to change and click “Edit.” This will open a small window with a box holding data that you can edit. It also allows you to set a data type and either apply the change just to that cell or to all identical cells. Click the “Apply” button to confirm the changes. You can also make changes by selecting Text Facets, which allow you to view cells within a column that share a common value and perform a search/replace on all cells sharing that value at one time. To do this, click the dropdown arrow at the column header and select Facet | Text facet (or any of the other facets: numeric, timeline, scatterplot, or custom). Review the results for items that could be changed or combined. For example, if there is an entry for “Astoria Boulevard” and another for “Astoria Blvd,” you may wish to combine them as a single value, “Astoria Boulevard.” The “Cluster” button will reveal any datapoints that could be combined in this way.

    refine-facet

  6. Transform the Data. You can also transform data in a column by using a user-defined expression, which operates similarly to a formula in a spreadsheet. Google Refine Expression Language, or GREL, was designed to resemble Javascript. The documentation website is particularly helpful with performing transformations, as it reviews the various expressions you can use to make changes to the data. (refine-transform)

    In my NYC Bathrooms file, I noticed that in the BUILDINGTY column most of the data is entered in all caps, but “Recreation Center” is in title case. I can change this by first selecting a text facet on this column and then performing a transformation on the results. First, click the dropdown arrow on the column heading and select “Text Facet.”

    refine-edit-cells

    This will open a window in the left column. Look for the item marked “Recreation Center.”

    refine-filter

    From the results, click the dropdown arrow on the column heading and select “Edit Cells” then “Transform.”

    refine-text-facet

    To change the value of the cells from “Recreation Center” to “RECREATION CENTER” simply type “RECREATION CENTER” (with the quotes) into the expression box. A preview below the expression box will show you if your changes are reflected properly.

    refine-custom-transform

Results/Discussion

I opened Google Refine and attempted to import the iTunes XML file to the system without success. I tried both the file import function and the copy text field but got errors. I turned to the NYC Open Data portal and found a simple CSV file of bathrooms in public parks, which I imported using the “This Computer” link and it worked.

Next, I reviewed the data for items that required cleaning. I found a number of bathrooms at Junior High School playgrounds whose records were labeled “Jhs” in the Description field. I decided to try transforming the instances of “Jhs” to the capitalized form, “JHS.” To do this, I used a transform expression that replaced the first four characters, “Jhs ” (including the trailing space), with “JHS ”:

"JHS " + value.substring(4)

refine-custom-text-transform

There were a few glitches that required me to edit some stray cells individually. For example, one record contained the string “Jhs ” somewhere other than in the first four characters, and I did not have enough class time to figure out how to handle this record. I edited the cell by moving the string to the front of the phrase, performing the transformation, and then making a note to restore the original placement afterward. There may be an expression that transforms a string without identifying its location (GREL’s value.replace() function appears to do this, but I did not test it in class).

The following image shows the “DESCRIPTIO” column with the transformed cells and all mentions of “JHS” in capital letters:

refine-facet

Future Directions

Most of the problems that I encountered while using Google Refine may be related to my beginner status with the program. Google Refine’s documentation offers extensive information about the ways you can sort, filter and transform data, and numerous examples of expressions that you can use to edit and transform multiple records at once. With patience, a user may find learning Google Refine’s tools saves time in the long run, particularly when working with large datasets.

TimelineJS – NYC Community Gardens

A timeline is a great way to display a series of events and related information and to tell a story around those events. Verite’s TimelineJS is a simple and useful tool for creating an attractive, easy-to-use timeline visualization. For my project, I created a timeline of events surrounding the history of farm and community gardens in New York City.

Materials

I started with a list of events created in Google Docs. This dataset includes the date of each event, a headline-style title, a description and the URL for an image to illustrate the event, as well as a caption. Where a website with additional information about one of the gardens exists, I linked either a portion of the description or the image itself to that site. Most of the information comes from high-quality sources such as the websites of the featured gardens, the New York City Department of Parks & Recreation and newspaper articles from the Chronicling America database. The New York Botanical Garden Mertz Digital Library was particularly useful, as were historical maps from the New York Public Library Map Division. I used Verite TimelineJS to process the data and create the narrative visualization.

Dataset: https://docs.google.com/spreadsheet/pub?key=0Ai4jVfIRGeOldExvazRNenliZDNVTDZuNFJ3eHRvZ1E&output=html

Method

I used Verite TimelineJS to demonstrate a narrative visualization of a series of events. Since I had been working on a research project on New York City community gardens during the summer term, I decided to use the timeline software to illustrate the development of both formal and informal gardens in New York City, beginning with the creation of Central Park in the 1850s and continuing to modern farm gardens. TimelineJS offers a three-step process for creating a timeline. To use it, visit http://timeline.verite.co/, click on “Make a Timeline” in the top navigation and follow the steps:

  1. Step One: Create a new Spreadsheet. This step requires a couple of mini steps. First, to use the TimelineJS template, click on the bold “Template” link. This opens the template in a new browser window. Click on the “Use this template” button to create a copy in Google Drive (you may need to log into your Google Account, or create a new one if you do not have one). Click on the title “Copy of TimelineJS Template” to enter a new, descriptive name for your file. Now enter your event data. Notice that some columns are required fields. You can roll over the header cells to see whether a field is required and, in some cases, to see examples of the type of data that can be included. The Text and Media Caption fields will accept HTML coding if you want to add formatting or links, but it is not required.
  2. Step Two: Publish to the Web. When you have entered your data, click on “File” in the Google Docs menu and then click “Publish to the Web.” In the popup window, click “Start Publishing.” It may take a few moments to complete this action, depending on how many events you have entered. When the spreadsheet is published, a link to the completed timeline will appear in the box at the bottom of the popup window. Copy the link so you can use it in the next step. If you mark the checkbox next to “Automatically republish when changes are made,” the timeline will reflect any subsequent changes to the spreadsheet.
  3. Step Three: Copy/paste the URL into the generator. Paste the Google Doc link into the box at Step 3. You can use the More Options dropdown to change the font. Click the Preview button to see what your timeline will look like. You can try different fonts from More Options until you see one you like best. The Embed Code box will reveal a set of code that you can put into the web page where you want the timeline to appear.

Depending on the type of system you are using to host the timeline, additional customization may be necessary. A custom website or self-hosted WordPress installation is ideal. The <iframe> code won’t work on a free WordPress.com account, but a self-hosted account will allow you to install a plugin to include this content. Some additional customization, in the form of edits to your site’s CSS, may also be desired if you need to comply with an overall style in the final output.

Results/Discussion

My completed timeline is located on my website at http://www.whysel.com/garden-timeline.php.

Serving data via Google Spreadsheets is a good choice for Verite TimelineJS. Storing the data in the cloud allows immediate access, and Google’s auto-update simplifies the process of creating a timeline. When you publish data in a Google Spreadsheet, you can decide whether to allow changes to be automatically reflected in the published document, or deselect this option and republish manually. I really liked this feature, because I didn’t have to worry about manually publishing after every small edit to the spreadsheet. On the other hand, if I want to make a more substantial change, such as adding a new event, I can instruct Google to stop auto-publishing until I have completed the new event record.

What I liked less about TimelineJS was the limited number of options for customizing the timeline. It gives you a list of font pairings and the ability to change the height and width of the timeline, but to edit the timeline colors and format further, you need to know how to edit your site’s CSS file. I wasn’t particularly happy with the font choices, as few of them are familiar, standard fonts, so I needed to preview each set to see what it would look like. I didn’t like many of them, either.

Also, the <iframe> based output means that certain content management systems, like a free WordPress site, can’t display the timeline because they don’t support <iframe>. The site I worked on over the summer is hosted at WordPress.com, so I was not able to include it in that project, which was disappointing.

My New York City gardening story is heavily focused on the period from 1900 to 1920, which was the period I focused on during my summer research. TimelineJS makes this visually clear in that there is a thick grouping of events during those decades and a sparse listing of events after 1920. This may appear to suggest that not much was going on in NYC around community gardening and farming after that time. It also makes me curious about what kinds of events I could add to the timeline to fill in the story during those years. For example, when was key legislation written that encouraged the development of vacant properties into community uses? I found relevant laws, but would need to do further research to find out when and why they were established.

Alternatively, I could create a separate timeline discussing the events from the 1978 founding of Operation GreenThumb to the present, telling each individual story in a separate visualization. I could also use a timeline to focus on the lifespan of a single plant, or broaden the narrative to a history of agrarian society at large. TimelineJS doesn’t tell you what you should put into your timeline and what to leave out. These are choices that the researcher needs to make to tell an interesting story or make a convincing argument. In the end, TimelineJS is one of many tools to create that narrative.

Update

At the time I wrote this report, I was not able to find a way to embed the timeline because my free, WordPress-hosted site would not display <iframe>. When I created this research post, I discovered that Knight Lab offers a plugin for WordPress that allows you to embed <iframe> code. There are several plugins for embedding this type of code, but Knight Lab’s is a good one and it works fine on my site. You just need to be sure your site is hosted on your own account rather than at WordPress.com. Visit www. for more information about hosting your own WordPress site.

Brooklyn Navy Yard – Admiral’s Row Evaluation

Collaborating with real estate appraisers, engineers and archaeologists, I created a market research report and map visualizations for a retail feasibility analysis of the historic Admiral’s Row properties in the Brooklyn Navy Yard. Sadly, the existing structure, a historically unique, Civil War-era timber house, was found to be structurally unsound and would need to be dismantled. We explored various alternative uses that would benefit the neighborhood and analyzed the market for retail and commercial uses.

Maps of retail activity in the subject neighborhood indicated a lack of almost every category of basic goods, such as food, personal care and general merchandise. These categories are typical retail expenditures for the PRIZM market segments defined by Nielsen for the area nearest the subject property. The following maps plot the location of grocery stores, pharmacies, full service restaurants and delis.

Grocery Stores (Admirals-Row-Neighborhood-Groceries)
Pharmacies (Admirals-Row-Neighborhood-Pharmacies)
Full Service Restaurants (Admirals-Row-Neighborhood-Full-Restaurants---including-Fast-Food)
Delis (Admirals-Row-Neighborhood-Delis)

The maps indicated a lack of similar facilities in the residential areas nearest the subject property, specifically the blocks north and west of the Brooklyn-Queens Expressway and south of York Street. Without these facilities nearby, residents of the Farragut Houses and the M1-2 district above the expressway had to go outside their immediate area to fulfill their retail needs for basic supplies (food and, especially, pharmacy items).

I later revisited the project in the context of community gardening and food justice.

navy-yard-map

This visualization shows that the closest community garden to the Farragut Houses, adjacent to the Navy Yard, is the Bridge Plaza Court garden, which is inconveniently located across the interchange for the Brooklyn-Queens Expressway and the Manhattan Bridge off-ramp. The next closest is at PS 67 in the Walt Whitman Houses, across the expressway from the Navy Yard. Most grocery options in this area were bodegas, small sandwich shops and delis; the next closest markets that do not require crossing an expressway are higher-end specialty markets in the more expensive neighborhood of Vinegar Hill. Mapping the distances to community gardens, greenmarkets and supermarkets would give an interesting picture of food justice in the area.

Social Tagging and Folksonomies in Museums

Folksonomies in Museums communicates how cultural institutions use public tagging to inform discovery of collection items. My co-authors on this project were Dana Hart and Kathleen Dowling.

I presented our poster, Folksonomies and Social Tagging in Museums, at the 2013 Information Architecture Summit in Baltimore on April 5, 2013. This poster was also nominated for the Pratt SILS Student Showcase on May 10:

 

Folksonomies in Museums Poster Handout from Noreen Whysel

The companion presentation, Folksonomies in Museums, and other recent presentations are available on SlideShare.

Bronx High School of Science Parents Association Website

Information Architecture, WordPress integration, competitive analysis, content management, governance documentation

The Bronx High School of Science Parents Association website suffered from several familiar conditions of sites built by committees with varying priorities: mismatched branding between the website and other communication channels, including its weekly newsletter, Facebook page, auction materials, and other printed materials sent home in students’ backpacks; an ever-growing list of left-navigation links; and an over-reliance on link text that lacked meaningful context or information scent. I led the site’s transition from flat HTML to WordPress, which was an opportunity to realign it around a more attractive, usable experience.

Screenshot of the original website
Original Website
Screenshot of the updated website
Updated Website Design

Method

I conducted a review of other specialized high school websites in New York City and realized that most of them suffered from these same afflictions. Interviews with committee members and active PA parents led to an information architecture that balanced parents’ need to find relevant information with the PA committees’ need to recruit donors and volunteers for their activities. I also created a news feed that integrates current postings with content on the site by placing categorized feeds on related pages.

At the end of my tenure, I trained a team to take on the task of managing the website, and created a policy document for future webmasters to follow to ensure consistency of messaging and brand across all PA activities.

I have enjoyed working on the website and on school sites in general. I was also on the PA web teams for Columbia Secondary School and Columbia Grammar and Preparatory School, both Drupal installations. The Columbia Secondary School project was particularly interesting as we were creating a back end process for integrating student profiles with a college application system.

Digital Humanities Skillshare

I developed a website for DH Skillshare, an online, open-access knowledge resource for digital humanists, as my final project for LIS 657: Digital Humanities. The goal of this website was to provide a set of written or recorded (video and audio) instructional posts covering tools that digital humanists might find of value, regardless of their field or institution. Students in my class created instructional posts as part of the course assignments. These posts do one or more of the following:

  1. teach beginners an important, useful, or interesting technology skill;
  2. demonstrate how software/tools can be used for a specific purpose;
  3. present a template/recipe for a particular type of DH study;
  4. compare/review existing resources;
  5. explain the methodology behind a tool or technique, etc.

My final project paper describes my research and evaluation of existing skillshare platforms and websites, including simple email discussion lists and informational websites, wikis, content management systems, archival systems, code repositories and supercomputers. The About Us section of the completed DH Skillshare website communicates the core values of a digital humanities skillshare site and provides criteria for selecting a platform so others can create their own skillshare site.

Project URL: http://dhskillshare.prattsils.org (has been replaced by http://dh.prattsils.org)