Emily Hamilton

Smart city data and political opportunism

The term “smart cities” encompasses the interaction of the Internet of Things, the urban environment, and city dwellers. While these innovations have facilitated some very successful new services, smart cities have important limitations in the public sphere. Smart city technology includes city services like bike share systems and pedestrian, transit, and driving directions conveniently accessible on smartphones.

Beyond these services, however, some city leaders have plans to revolutionize city planning through smart city data. Their ideas include facilitating evacuation during emergencies, reducing traffic congestion, and lowering crime. The proliferation of data arising from smart city tools has fostered many public policy experiments. In his book Smart Cities, Anthony Townsend covers how big data has opened up new opportunities for government service provision and government control. Townsend cites Rio de Janeiro’s implementation of IBM technology as an example of the utopian thinking that smart city tools have inspired:

What began as a tool to predict rain and manage flood response morphed into a high-precision control panel for the entire city. […] Just how effective Rio’s operations center will be in taming the wild metropolis remains to be seen. Urban security experts with whom I have spoken are skeptical that it will dramatically improve the effectiveness of law enforcement, and technology experts point out that beyond the video streams there has been little investment in new sensor infrastructure to feed real-time data to the center.

Townsend points out that a key motivation for city leaders adopting smart city technology has been creating the impression of being high tech rather than actually improving city services. He quotes the IBM team that implemented Rio’s weather forecasting and surveillance system saying, “That was a big surprise to us. We thought that this was going to be about ROI models, and the efficiency that we can produce. 
To […]

Shell Games in NIMBYism

Yesterday the Cato Institute hosted an event featuring William Fischel’s discussion of his new book Zoning Rules! with commentary by Mark Calabria, Matt Yglesias, and Robert Dietz. Fischel explained his theory that zoning was an effective tool for minimizing nuisances between land uses through the 1970s. Until that time, he asserts, city planners did a good job of separating incompatible land uses, such as industrial and residential, benefiting residents and protecting home values in the process.

His theory is that in the 1970s, inflation increased the value of homeownership relative to cash savings, leading homeowners to increasingly view their houses as investments. At the same time, the rise of environmentalism provided the policy justification for using zoning as a tool to limit the growth of housing supply. According to his theory, homeowners then began lobbying for downzoning to protect their large, undiversified asset, seeking to minimize any downside risk to their home values.

In his discussion of Fischel’s book, Matt Yglesias pointed out that today, NIMBYism has gone far beyond keeping out polluting land uses and low-income neighbors. For example, some residents in San Francisco’s Mission District are supporting a moratorium on luxury housing development, and some Brooklyn residents are fighting to keep vacant industrial properties in place on the waterfront. Permitting high-end residential development in these neighborhoods would be more likely to raise than lower nearby homeowners’ property values. This opposition to development is at odds with Euclidean zoning in these neighborhoods, where expensive housing now abuts abandoned warehouses. It also demonstrates that NIMBYs are not motivated by narrow profit interests, but have complex preferences that are not easily understood by observing the policies they advocate for. 
In the private sector, profit is measured in money, and it’s generally safe to say that both parties to a transaction […]

Systemic bias against small scale development

In recent years, some of the country’s largest mixed-use real estate developments have involved disposition of government-owned land directly to developers. For example, Atlantic Yards in Brooklyn and DC’s City Center and Marriott Marquis came about when municipal governments issued requests for proposals for underutilized land that they owned.

Last week, MidAtlantic Realty Partners and Ellis Development Group closed on a deal to purchase 965 Florida Avenue NW from the District of Columbia. In 2012 the Office of the Deputy Mayor for Planning and Economic Development (DMPED) issued an RFP for this 1.45-acre site at the intersection of the Shaw, U Street, and Columbia Heights neighborhoods. The RFP specified that any development on the site include affordable housing. Ultimately two developers submitted proposals. The winning developer purchased the land for just $400,000, at least $5 million less than appraisers estimated the land to be worth, even after factoring in the affordable housing provision and needed environmental cleanup.

By choosing to allocate very large parcels of land through this process rather than auctioning off small parcels of city-owned land, municipal officials favor large developers in two ways: smaller developers can’t afford such large parcels, and the RFP process rewards established developers with political connections. In DC, large development firms provide some of the largest contributions to local campaigns. The sale of large parcels of public land thus not only excludes small developers who have less financial capital; it also reduces the pool of potential buyers to those with the political capital needed to navigate the RFP process.

In the case of a private owner selling off a large tract of land, we would expect him to list the property for sale, accepting the best price he could get. If he thought smaller parcels would sell for more, the owner would likely try to subdivide before selling, expanding […]

Engineering in the dark

The similarity of urban design across American neighborhoods is no coincidence, but neither is it the result of city planners’ uniform adherence to best practices. Infrastructure is often built based on shockingly little information about the demands of its users. And while poorly reasoned infrastructure policy in one city is bad enough, the United States’ broad adherence to poorly reasoned policies has resulted in a nation in which swaths of neighborhoods are built on poor design foundations.

Parking Requirements

In The High Cost of Free Parking, Donald Shoup explains the origin of municipal parking requirements. Municipal planning offices do not have the resources to study the amount of parking that businesses should provide. Even with more staff, it’s not clear that planners would be able to determine optimal parking requirements unless they allowed business owners themselves to experiment and choose the amount of parking on their own in a learning process of how best to serve their customers.

The Institute of Transportation Engineers is one of the only organizations that provides estimates of the number of car trips that businesses generate. Given the lack of information planners have to determine parking requirements, they often rely on ITE’s figures to set their parking requirements. However, ITE studies are often conducted at businesses that already provide ample free parking, ignoring the potential for businesses to manage demand for parking on their property through prices. Furthermore, ITE estimates of trip generation are typically based on a very small sample of locations, which are unlikely to be representative of businesses and cities in general. In the example below, the ITE provides a recommendation for fast food restaurants’ parking requirements based on their floor area. Even though the chart includes a line of best fit for the plot of peak parking spot occupation against floor area, the ITE hasn’t demonstrated a correlation between these two variables. 
Shoup points out: We cannot say much about how […]

The History of Progressive Housing Policy

Maya Dukmasova recently published at Slate an interesting piece about the potential for current trends in affordable housing policy to tear apart the social capital of low-income people. She makes the Ostromian point that policymakers’ lack of understanding of the informal institutions that govern communities makes it likely that government housing policies will have unintended consequences. While Dukmasova aptly characterizes some of the problems with American anti-poverty programs to date, she gets some key history wrong. In particular, she writes:

Part of the liberal establishment’s failure to address this problem stems from its inability to embrace truly progressive understandings of poverty. Those advocating for solutions to poverty rarely speak about the way our economy and social infrastructures entrench it. Rather, much of liberals’ efforts have been crippled by unexamined and unchallenged beliefs that the spaces where poor people of color live are morally compromised, beliefs summed up by one well-intentioned but ultimately damaging term: concentrated poverty.

In fact, the programs that she criticizes grew directly out of progressive scholarship and politics. Nineteenth-century progressives set their sights on demolishing tenements occupied by low-income, immigrant populations with the goal of relocating residents to suburban homes deemed healthier and better for the morals of their inhabitants. Jacob Riis’ influential work How the Other Half Lives fueled a progressive movement to eradicate tenement housing, with activists motivated both by altruism toward the poor and by a fear of the disease and cultural changes that immigrant-dominated neighborhoods brought. Riis became one of the first reformers demanding that “light and air” be a key consideration in new construction. 
While he used this phrase to campaign against unventilated tenements that actually did create unhealthy indoor conditions, it ultimately provided the policy rationale for New York’s 1916 Zoning Resolution, which would limit building height and massing to protect outdoor light and air, as if shade is […]

Free parking isn’t free

Last week I wrote a piece for City Journal on how smart parking could allow New York City to implement variable pricing. Street parking sensors allow prices to change to maintain an empty spot on each block, as parking expert Donald Shoup recommends. By eliminating the incentive to drive around looking for parking, this policy could drastically reduce traffic congestion and save drivers significant amounts of time.

All of the comments on my post argue that charging for parking according to demand would increase the cost of living in already expensive cities and hurt low-income people. While this argument is very common among supporters of underpriced street parking, it’s false. In actuality, today’s standard policies of underpriced street parking and off-street parking requirements increase the cost of living, and low-income people bear a disproportionate share of the costs of these policies.

Properly implemented variable pricing systems may not even increase the total price that drivers pay to park their cars. San Francisco has gone farther than any other city to implement variable parking pricing. Its SFPark system updates the prices on the city’s meters periodically with the goal of keeping the occupancy on each block below 80%. While this objective has led to significant price increases for the most in-demand blocks, it has actually reduced the city’s total parking meter revenues because prices were allowed to fall on many blocks to reach the 80% target.

Whether they cause total parking meter revenue to increase or decrease, variable parking prices are key to reducing off-street parking requirements, which impose huge costs on development. The political pressure for off-street parking often stems from homeowners who live near commercial destinations. Because people who drive to businesses prefer free parking to paid parking, they may park in a zero-price curb spot in a residential neighborhood near their destination rather than at a […]
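The occupancy-targeting rule described above amounts to a simple feedback loop, which can be sketched as follows. This is a minimal illustration: the dollar amounts, step size, and rate bounds are my own assumptions, not SFPark's actual parameters.

```python
# Hypothetical sketch of occupancy-targeted meter pricing.
# Rates, step size, and bounds are illustrative assumptions only.

TARGET_OCCUPANCY = 0.80  # aim to keep at least one spot open per block
STEP = 0.25              # dollars per adjustment period
MIN_RATE, MAX_RATE = 0.25, 6.00

def adjust_rate(current_rate, observed_occupancy):
    """Nudge a block's hourly meter rate toward the occupancy target.

    Over-occupied blocks get more expensive to free up spots;
    under-occupied blocks get cheaper to attract parkers.
    """
    if observed_occupancy > TARGET_OCCUPANCY:
        return min(current_rate + STEP, MAX_RATE)
    if observed_occupancy < TARGET_OCCUPANCY:
        return max(current_rate - STEP, MIN_RATE)
    return current_rate

print(adjust_rate(2.00, 0.95))  # crowded block: 2.25
print(adjust_rate(2.00, 0.40))  # quiet block: 1.75
```

Because the rule cuts prices on under-occupied blocks as readily as it raises them on crowded ones, total meter revenue can fall even as peak-demand blocks get more expensive, which is what San Francisco observed.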

Urban Renewal in Philadelphia

The Philadelphia Housing Authority will seize nearly 1,300 properties for a major urban renewal project in the city’s Sharswood neighborhood. The plan includes the demolition of two of the neighborhood’s three high-rise public housing buildings — the Blumberg towers — which will be replaced with a large mixed-income development. The new buildings will increase the neighborhood population tenfold, with the majority of the new units to be affordable housing. The majority of the 1,300 lots slated for eminent domain are currently vacant.

At a City Council hearing on Tuesday, Philadelphia Housing Authority CEO Kelvin Jeremiah testified that the redevelopment plan furthers the agency’s efforts to replace high-rise housing projects with lower-density units. However, PHA’s plan misses the forest for the trees. The benefits of demolishing the two towers are immediately undone by creating an entire neighborhood of public housing, effectively increasing the concentration of poverty in Sharswood.

Adam Lang has lived in Sharswood for 10 years, and he posted about the plan in the Market Urbanism Facebook group. Adam has raised concerns that the PHA does not have an accurate count of how many of the 1,300 properties in the redevelopment territory are currently occupied. Adam’s primary residence is not under threat of eminent domain; however, he owns four lots that are. He uses two lots adjacent to his home as his yard. The other two are a shell and a vacant lot. He purchased them, ironically, from the city with the plan to turn them into rentals. Adam’s concern about the inaccuracy of PHA’s vacancy statistics stems from the method that PHA employees used to create their estimate: driving by homes to see if they look occupied or not. Adam’s own property was on the list of vacants, and he said that he’s aware of other properties in the neighborhood […]

The benefits of the market in both infrastructure and urbanism

Alain Bertaud, a senior research scholar at the Urbanization Project, has had a long career in urban planning, and many of his writings have a market urbanist flavor. He is currently working on a book called Order Without Design, and last year he published an excerpt from that book called “The Formation of Urban Spatial Structures: Market vs. Design.” In the article he offers a compelling case for letting the market determine building sizes and uses, but he argues that infrastructure provision must be left to the state. I agree wholeheartedly with the first portion of his paper, but find that his arguments for the market in land use contradict his arguments for the state in infrastructure.

Bertaud eloquently explains the knowledge problem facing urban planners who seek to regulate efficient land use patterns. Because economic growth is a complex process that is dynamic over time, he explains, top-down design will fail to keep up with changing land use needs, to the detriment of economic growth. He cites Hartford, Connecticut as an example. The city developed a large insurance industry, but as it became profitable for American insurance companies to outsource clerical work abroad, fewer Connecticut residents found employment in the industry. However, in a futile effort to maintain jobs, urban planners have refused to update land use regulations to permit new employment opportunities. Rather than succeeding in keeping historical sources of employment in place, urban planners have prevented the economic diversity that can hedge against a downturn in a specific industry.

Bertaud describes the price mechanism that allows the market to identify land’s highest-value use:

Markets … recycle obsolete land use quasi-automatically through rising and falling prices. This constant land recycling is usually very positive for the long-term welfare of the urban population. 
In the short term, changes in land use and in the spatial concentration of employment are disorienting and alarming for workers and […]

The Status of Smart Growth Regulation

Debates over land use policy often devolve into opponents arguing over how to interpret the same set of facts. For example, “market suburbanists” argue that because apartments in walkable neighborhoods tend to cost more per square foot than suburban single-family homes, high densities make coastal cities expensive. Smart Growth advocates may look at the same data and argue that zoning rules restricting the supply of high-density housing in desirable locations are what make housing expensive.

To provide clarity to the debate on land use regulations, Mike Lewyn and Kip Jackson survey the zoning codes of the 24 cities with populations between 500,000 and 1,000,000 residents. In their new Mercatus Center study, they find that while some cities have in fact enacted the sorts of policies that market suburbanists fear — minimum density requirements and maximum parking rules — these regulations remain very rare relative to near-ubiquitous maximum density rules and minimum parking requirements. Lewyn and Jackson list the mid-size cities that have adopted various types of Smart Growth regulations below. While a handful of cities have adopted the types of regulations they surveyed, every U.S. city in this sample has a maze of traditional zoning rules.

A perpetual challenge in studying the effects of both traditional and Smart Growth regulations is finding data. Municipal codes are all housed on unique websites with varying degrees of accessibility. The difficulty of achieving clear answers as to what causes high housing prices leads advocates of traditional zoning and Smart Growth to shout past one another.

While Smart Growth as a whole is maligned by some advocates of the free market, many Smart Growth tenets are actually deregulatory. Policy changes including upzoning, reducing parking requirements, and permitting mixed-use development are all steps toward laissez-faire land use relative to the status quo, even though these policies […]

The importance of driverless trains

As Honolulu makes progress on its driverless elevated rail system, Washington, DC is finally beginning to return to computer operation on its Red Line after a 2009 crash brought an end to reliance on the computerized system. While the move in DC will facilitate smoother driving and braking, WMATA still relies on train operators in the cabs, forgoing the cost-saving opportunity that driverless systems provide.

It’s difficult to overstate the importance of driverless trains in the effort to bring U.S. transit operations down to a reasonable price. Driverless systems currently operate successfully in cities from Vancouver to Algiers, and the world’s most financially successful intracity transit systems, in Hong Kong and Tokyo, have embraced the technology. In spite of WMATA’s high-profile accident, which happened while the trains were computer-operated, a well-designed driverless system is actually safer than a human-operated one. Driverless systems offer better ride quality, stay on time, and face a lower marginal cost of extending service hours.

Labor costs make up huge shares of U.S. transit systems’ operating budgets. In DC, for example, personnel costs make up 70% of the agency’s operating budget. In 2010, WMATA spent $38 million on the salaries of 611 train operators, and this does not include their retirement and health benefits. In New York, personnel costs make up $8.5 billion of the agency’s $11.5 billion operating costs, and in Chicago labor takes up 73% of CTA’s operating expenses. Obviously not all transit workers’ jobs can be automated (all of these systems have more bus drivers than train operators), and some operating costs would rise under a driverless system. But taking steps toward reducing labor, which comes at a premium in the high-cost-of-living cities where transit is most important, is crucial for reducing transit’s operating costs and making transit systems financially sustainable. In all sorts of industries automation […]
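As a back-of-the-envelope check on the figures cited above, the per-operator average and New York's labor share fall out of simple division. The average here covers salaries only (benefits excluded, as noted in the post) and is my own derived estimate.

```python
# Rough arithmetic check of the labor-cost figures cited in the post.
# Inputs come from the text; the per-operator average is a derived
# estimate covering salaries only, not retirement or health benefits.

wmata_operator_payroll = 38_000_000  # 2010 WMATA train-operator salaries
wmata_operators = 611

nyc_personnel = 8.5e9    # New York personnel costs
nyc_operating = 11.5e9   # New York total operating costs

avg_salary = wmata_operator_payroll / wmata_operators
nyc_labor_share = nyc_personnel / nyc_operating

print(f"average WMATA operator salary: ${avg_salary:,.0f}")          # $62,193
print(f"NYC labor share of operating costs: {nyc_labor_share:.0%}")  # 74%
```

At roughly $62,000 per operator before benefits, New York's labor share (about 74%) lands right next to Chicago's reported 73%, which suggests the pattern is systemic rather than one agency's problem.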