Using
an input image, Tree-D Fusion creates a 3D tree model that can be used to
simulate various stages of development. Credit: Tree-D Fusion Team
Trees
compete for space as they grow. A tree with branches close to a wall will
develop differently from one growing on open ground.
Now everyone—from urban planners and environmental scientists to homeowners—can access a new algorithm for
tree reconstruction developed at Purdue University to see how trees will
shade an area or learn what a tree will look like in 20 years. Purdue computer
scientists and digital foresters have used artificial intelligence to generate this first-ever database, which
contains three-dimensional models of more than 600,000 real trees.
"These tree models are what we call 'simulation-ready,'" said Purdue's Bedrich Benes, professor and associate head of the Department of Computer Science in the College of Science and a member of the Institute for Digital Forestry. The database and related code are publicly available.
Tree-D Fusion can reconstruct 3D simulation-ready
tree models from a single image, like these examples using a Google Street View
image. Credit: Tree-D Fusion Team
Benes and colleagues at Purdue's
Institute for Digital Forestry, Google and the Massachusetts Institute of
Technology described the details of their Tree-D
Fusion algorithm
in the conference proceedings of the European
Conference on Computer Vision (ECCV), 2024.
"Trees provide immense and
essential value to human society and underpin diverse ecosystems worldwide.
They cool the environment, improve air quality, capture carbon dioxide, produce
oxygen, and have a positive effect on human physical and mental health,"
the co-authors wrote. "The complex effect of trees on the environment has
been studied for centuries. Currently, computational models that seek to
understand these relationships are hindered by a lack of data."
The team used the data from
the Auto Arborist Dataset introduced by a Google Research team in 2022. The dataset consisted of about
2.6 million trees belonging to 32 genus-level categories from 23 North American
cities.
"The
particular challenge of this project was getting the three-dimensional model
from a single image," Benes said. "There is not enough input data to
extract high-detail information. The generated tree models are approximations.
We don't claim that these are perfect digital twins, but they are useful, for
example, for estimating the shading in urban areas."
"We
leverage recent advances in diffusion models to provide prior information for
3D tree reconstruction," said Raymond Yeh, assistant professor of computer
science, who leads the computer vision and AI efforts of the project.
Tree-D Fusion
will offer more in the future, said the study's lead author, Purdue's Jae Joong
Lee, a Ph.D. student in Benes's Computational Vegetation Group and a member of
the Institute for Digital Forestry. "Together with my collaborators, I
envision expanding the platform's capabilities to a planetary scale. Our goal
is to use AI-driven insights on social and environmental benefits on a large
scale," Lee said.
Additional
co-authors include Purdue's Bosheng Li, Raymond Yeh and Songlin Fei, all
members of the Institute for Digital Forestry; Sara Beery of Massachusetts
Institute of Technology; and Jonathan Huang, formerly of Google, now head of AI
at Scaled Foundations.
"One goal
of Digital Forestry is to improve societal human well-being. We have different
projects working on tree localization and inventory from smartphone to
satellite," said Fei, the institute's director and Dean's Chair in Remote
Sensing.
"This
project provides contextualized information on urban tree structure that can be
done at scale, providing managers critical information to better manage urban
trees. With continued progress on this and other projects, we aim to help make
our cities greener, smarter and healthier, tree by tree."
The initial
data for Tree-D Fusion came from public tree census records that many cities
maintain online. The Google Research team then merged the tree census data with
Street View and overhead color imagery, making a large-scale, computer-vision
tree-monitoring tool available for the first time. Researchers at MIT's
Senseable City Lab have already used the new 3D tree models to plot shaded
walking routes through select cities.
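The article does not say how census records were matched to imagery. One plausible mechanism is a nearest-neighbor spatial join, sketched below with invented field names and coordinates: each tree is paired with its closest street-level panorama by great-circle distance.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented example records (not from the Auto Arborist Dataset).
trees = [{"id": 1, "lat": 40.4259, "lon": -86.9081}]
panoramas = [
    {"pano_id": "a", "lat": 40.4260, "lon": -86.9080},  # ~14 m away
    {"pano_id": "b", "lat": 40.4300, "lon": -86.9200},  # ~1 km away
]

# Attach each tree to the nearest panorama.
for tree in trees:
    nearest = min(panoramas,
                  key=lambda p: haversine_m(tree["lat"], tree["lon"],
                                            p["lat"], p["lon"]))
    tree["pano_id"] = nearest["pano_id"]

print(trees[0]["pano_id"])  # prints "a"
```

At city scale a spatial index (k-d tree or geohash buckets) would replace the linear scan, but the pairing logic is the same.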
"Every
time a tree-mapping vehicle passes through a city now, we're not just taking
snapshots—we're watching these urban forests evolve in real time," said
Beery, assistant professor in the MIT Electrical Engineering and Computer
Science Department.
"This
continuous monitoring creates a living digital forest that mirrors its physical
counterpart, offering cities a powerful lens to observe how environmental
stresses shape tree health and growth patterns across the urban
landscape."
Comparisons between Tree-D Fusion and other 3D reconstruction methods showed
that Tree-D Fusion performs better in many respects, including projected
shading, which is important in planning for green cities.
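Projected shading ultimately reduces to solar geometry. As a minimal sketch (not the paper's method; flat ground assumed, with the tree treated as a vertical object of known height):

```python
import math

def shadow_length(tree_height_m: float, sun_elevation_deg: float) -> float:
    """Length in meters of the shadow cast on flat ground by a tree of the
    given height when the sun sits at the given elevation angle."""
    return tree_height_m / math.tan(math.radians(sun_elevation_deg))

# A 10 m tree with the sun 45 degrees above the horizon casts a ~10 m shadow;
# the lower the sun, the longer the shadow.
print(round(shadow_length(10.0, 45.0), 1))  # 10.0
print(round(shadow_length(10.0, 20.0), 1))  # 27.5
```

A full 3D tree model refines this by projecting the actual crown shape rather than a single height value, which is where the reconstructed models add accuracy.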
"The AI model we used was quite computationally demanding," Benes said. Calculating the entire dataset with a single graphics processing unit (GPU) would have taken about 23 years. Even using all nine of the supercomputing clusters that Purdue's Rosen Center for Advanced Computing had at the time—now it has ten—the calculations took nearly six months to complete.
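The reported figures imply a speedup that is easy to sanity-check. A back-of-the-envelope calculation, using only the rounded numbers from the article:

```python
# Illustrative arithmetic based on the figures reported in the article.
single_gpu_years = 23   # estimated time on one GPU for the whole dataset
cluster_months = 6      # approximate actual time on nine clusters

# Effective speedup of the clusters over a single GPU.
speedup = single_gpu_years * 12 / cluster_months
print(speedup)  # 46.0 -> roughly a 46x effective speedup
```

That ratio is consistent with nine clusters each contributing a handful of GPUs' worth of throughput, though the article does not break the hardware down further.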
by Steve Koppes, Purdue University
Source: A 3D tree reconstruction algorithm contributes to a new era of urban planning