Modeling 3D objects from sketches requires solving several
challenging problems, including segmentation, recognition, and
reconstruction. Some of these tasks are harder for humans and some are
harder for the machine. At the core of the problem lies the need for
semantic understanding of the shape's geometry from the sketch. In this
paper we propose a method to model 3D objects from sketches that assigns
to humans the semantic tasks that are very simple for them yet extremely
difficult for the machine, and to the machine the tasks that are harder
for humans. The user assists recognition and
segmentation by choosing and placing specific geometric primitives on
the relevant parts of the sketch. The machine first snaps the primitive
to the sketch by fitting its projection to the sketch lines, and then
improves the model globally by inferring geosemantic constraints
that link the different parts. The fitting occurs in real time, allowing
the user to be only as precise as needed to provide a good starting
configuration for this non-convex optimization problem. We evaluate the
accessibility of our approach with a user study.
@Article{CG_0201,
author = {Alex Shtof and Alexander Agathos and Yotam Gingold and Ariel Shamir and Daniel Cohen-Or},
title = {{G}eosemantic {S}napping for {S}ketch-{B}ased {M}odeling},
journal = {Computer Graphics Forum},
volume = {32},
number = {2},
month = {May},
year = {2013}
}
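
To make the snapping step concrete, here is a minimal, hypothetical Python sketch (not the paper's implementation) of the idea: a primitive's projected outline is fit to the sketch strokes by minimizing the distance from sampled outline points to their nearest sketch points. The ellipse primitive, the nearest-point objective, and the Nelder-Mead solver are illustrative assumptions; the objective is non-convex, which is why the user's rough placement serves as the starting configuration.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def ellipse_outline(params, n=100):
    # Sample the projected outline of a hypothetical primitive: an
    # ellipse with center (cx, cy), radii (rx, ry), and rotation phi.
    cx, cy, rx, ry, phi = params
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = rx * np.cos(t), ry * np.sin(t)
    c, s = np.cos(phi), np.sin(phi)
    return np.column_stack([cx + c * x - s * y, cy + s * x + c * y])

def snap(sketch_points, init_params):
    # Snap the primitive by minimizing the mean squared distance from
    # sampled outline points to their nearest sketch points. The
    # objective is non-convex, so init_params (the user's rough
    # placement) must be a reasonable starting configuration.
    tree = cKDTree(sketch_points)
    def objective(params):
        dists, _ = tree.query(ellipse_outline(params))
        return np.mean(dists ** 2)
    return minimize(objective, init_params, method="Nelder-Mead").x

# Usage: noisy strokes around a ground-truth ellipse, rough placement.
rng = np.random.default_rng(0)
truth = np.array([5.0, 3.0, 2.0, 1.0, 0.3])
sketch = ellipse_outline(truth, 200) + 0.02 * rng.standard_normal((200, 2))
print(snap(sketch, np.array([4.5, 3.3, 1.5, 1.2, 0.0])))

In the paper's pipeline, the subsequent global step augments such per-primitive fits with inferred geosemantic constraints (such as parallelism, coplanarity, or concentricity) linking the parts; in a sketch like this one, those would appear as additional penalty or constraint terms in a joint optimization over all primitives.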