<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Joomla! - Open Source Content Management" -->
<feed xmlns="http://www.w3.org/2005/Atom"  xml:lang="en-gb">
	<title type="text">Demos</title>
	<subtitle type="text">vision</subtitle>
	<link rel="alternate" type="text/html" href="http://escience.ime.usp.br"/>
	<id>http://escience.ime.usp.br/vision/demos/atom</id>
	<updated>2019-08-23T00:09:09+00:00</updated>
	<generator uri="http://joomla.org" version="2.5">Joomla! - Open Source Content Management</generator>
	<link rel="self" type="application/atom+xml" href="http://escience.ime.usp.br/vision/demos/atom"/>
	<entry>
		<title>Semi-automated Segmentation of Brain Tumor</title>
		<link rel="alternate" type="text/html" href="http://escience.ime.usp.br/vision/semi-automated-segmentation-of-brain-tumor"/>
		<published>2012-10-22T18:44:35+00:00</published>
		<updated>2012-10-22T18:44:35+00:00</updated>
		<id>http://escience.ime.usp.br/vision/semi-automated-segmentation-of-brain-tumor</id>
		<author>
			<name>Igor</name>
			<email>igordsm@gmail.com</email>
		</author>
		<summary type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;p id=&quot;title&quot;&gt;Segmentation of brain lesions from medical images is a difficult task to be mastered by the specialist. This is due to the presence of noise, partial volume effects and susceptibility artifacts in the images. These images also contain abnormalities in the distribution of the intensities of the white matter, gray matter and cerebrospinal fluid. All these problems can interfere with the results when manual segmentation is used. Manual segmentation uses local anatomical information based on the user experience; that implies the necessity of constant human intervention. Deformable model approaches (geometric and parametric) attempt to reduce these shortcomings by outlining the region of interest in a semi-automatic manner. These methods have been shown to be effective in the extraction of the lesion borders in brain MR images with reduced user intervention. However, due to the restrictions of the deformable models when dealing with regions without well defined edges, the proposal of this work is to apply the Mumford-Shah model via level set methods represented as geometrical deformable models, in order to segment multi-sequence magnetic resonance (MR) images of the brain composed of FLAIR (Fluid Attenuated Inversion Recovery), T1 and T2-weighted images. Results showed that segmentation using multi-sequence images provides superior results than using each sequence alone. As a part of this work, a software with a minimal human intervention has been developed to visualize and segment the brain lesions that appear as hyperintensities in MR images. As a consequence, medical doctors can exploit the segmentation results to follow up their patients by assessing the evolution or involution of the brain lesions.&lt;/p&gt;
&lt;p&gt;  &lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/vfvo92ZGWp0?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/vfvo92ZGWp0?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</summary>
		<content type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;p id=&quot;title&quot;&gt;Segmentation of brain lesions from medical images is a difficult task to be mastered by the specialist. This is due to the presence of noise, partial volume effects and susceptibility artifacts in the images. These images also contain abnormalities in the distribution of the intensities of the white matter, gray matter and cerebrospinal fluid. All these problems can interfere with the results when manual segmentation is used. Manual segmentation uses local anatomical information based on the user experience; that implies the necessity of constant human intervention. Deformable model approaches (geometric and parametric) attempt to reduce these shortcomings by outlining the region of interest in a semi-automatic manner. These methods have been shown to be effective in the extraction of the lesion borders in brain MR images with reduced user intervention. However, due to the restrictions of the deformable models when dealing with regions without well defined edges, the proposal of this work is to apply the Mumford-Shah model via level set methods represented as geometrical deformable models, in order to segment multi-sequence magnetic resonance (MR) images of the brain composed of FLAIR (Fluid Attenuated Inversion Recovery), T1 and T2-weighted images. Results showed that segmentation using multi-sequence images provides superior results than using each sequence alone. As a part of this work, a software with a minimal human intervention has been developed to visualize and segment the brain lesions that appear as hyperintensities in MR images. As a consequence, medical doctors can exploit the segmentation results to follow up their patients by assessing the evolution or involution of the brain lesions.&lt;/p&gt;
&lt;p&gt;  &lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/vfvo92ZGWp0?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/vfvo92ZGWp0?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</content>
		<category term="Demos" />
	</entry>
	<entry>
		<title>A Backmapping Approach for Graph-based Object Tracking</title>
		<link rel="alternate" type="text/html" href="http://escience.ime.usp.br/vision/a-backmapping-approach-for-graph-based-object-tracking"/>
		<published>2012-10-15T23:55:41+00:00</published>
		<updated>2012-10-15T23:55:41+00:00</updated>
		<id>http://escience.ime.usp.br/vision/a-backmapping-approach-for-graph-based-object-tracking</id>
		<author>
			<name>Igor</name>
			<email>igordsm@gmail.com</email>
		</author>
		<summary type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;T.M. Paixao, A.B.V. Graciano, R.M. Cesar-Jr, and R. Hirata. A Backmapping Approach for Graph-based Object Tracking. In C. Jung and M. Walter, editors, Proceedings, Los Alamitos, Oct. 12-15, 2008 2008. IEEE Computer Society.&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/U7Ia2Pwa4jg?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/U7Ia2Pwa4jg?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</summary>
		<content type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;T.M. Paixao, A.B.V. Graciano, R.M. Cesar-Jr, and R. Hirata. A Backmapping Approach for Graph-based Object Tracking. In C. Jung and M. Walter, editors, Proceedings, Los Alamitos, Oct. 12-15, 2008 2008. IEEE Computer Society.&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/U7Ia2Pwa4jg?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/U7Ia2Pwa4jg?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</content>
		<category term="Demos" />
	</entry>
	<entry>
		<title>Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification</title>
		<link rel="alternate" type="text/html" href="http://escience.ime.usp.br/vision/retinal-vessel-segmentation-using-the-2-d-gabor-wavelet-and-supervised-classification"/>
		<published>2012-10-15T23:54:50+00:00</published>
		<updated>2012-10-15T23:54:50+00:00</updated>
		<id>http://escience.ime.usp.br/vision/retinal-vessel-segmentation-using-the-2-d-gabor-wavelet-and-supervised-classification</id>
		<author>
			<name>Igor</name>
			<email>igordsm@gmail.com</email>
		</author>
		<summary type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;Video demonstration of our software developed for retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. More information can be found at the project's website: &lt;a href=&quot;http://www.retina.iv.fapesp.br/&quot;&gt;http://www.retina.iv.fapesp.br/&lt;/a&gt;&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/9flL-W9BZhU?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/9flL-W9BZhU?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</summary>
		<content type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;Video demonstration of our software developed for retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. More information can be found at the project's website: &lt;a href=&quot;http://www.retina.iv.fapesp.br/&quot;&gt;http://www.retina.iv.fapesp.br/&lt;/a&gt;&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/9flL-W9BZhU?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/9flL-W9BZhU?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</content>
		<category term="Demos" />
	</entry>
	<entry>
		<title>SegmentIt - Watershed Based Image Segmentation Tool</title>
		<link rel="alternate" type="text/html" href="http://escience.ime.usp.br/vision/demo-segmentit"/>
		<published>2012-10-15T23:53:24+00:00</published>
		<updated>2012-10-15T23:53:24+00:00</updated>
		<id>http://escience.ime.usp.br/vision/demo-segmentit</id>
		<author>
			<name>Nina</name>
			<email>nina@ime.usp.br</email>
		</author>
		<summary type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;This interactive image segmentation tool allows to switch back and forth between the watershed approaches (watershed from markers and hierarchical watershed) so the user can explore the strengths of both. Developed by Bruno Klava. See more at &lt;a href=&quot;http://watershed.sourceforge.net/&quot;&gt;http://watershed.sourceforge.net/&lt;/a&gt;&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/uELGhANsE64?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/uELGhANsE64?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</summary>
		<content type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div id=&quot;parent-fieldname-description&quot; class=&quot;documentDescription&quot;&gt;This interactive image segmentation tool allows to switch back and forth between the watershed approaches (watershed from markers and hierarchical watershed) so the user can explore the strengths of both. Developed by Bruno Klava. See more at &lt;a href=&quot;http://watershed.sourceforge.net/&quot;&gt;http://watershed.sourceforge.net/&lt;/a&gt;&lt;/div&gt;
&lt;div class=&quot;documentDescription&quot;&gt; &lt;/div&gt;
&lt;p&gt;&lt;object style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; width=&quot;420&quot; height=&quot;315&quot; data=&quot;http://www.youtube.com/v/uELGhANsE64?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;src&quot; value=&quot;http://www.youtube.com/v/uELGhANsE64?version=3&amp;amp;hl=en_US&quot; /&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;/object&gt;&lt;/p&gt;&lt;/div&gt;</content>
		<category term="Demos" />
	</entry>
	<entry>
		<title>Fast Component-Based QR Code Detection in Arbitrarily Acquired Images</title>
		<link rel="alternate" type="text/html" href="http://escience.ime.usp.br/vision/fastqr"/>
		<published>2012-10-15T20:28:06+00:00</published>
		<updated>2012-10-15T20:28:06+00:00</updated>
		<id>http://escience.ime.usp.br/vision/fastqr</id>
		<author>
			<name>Nina</name>
			<email>nina@ime.usp.br</email>
		</author>
		<summary type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div class=&quot;documentDescription&quot;&gt;
&lt;table style=&quot;width: 820px;&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;http://www.springerlink.com/content/j257n751gg392850/&quot;&gt;http://www.springerlink.com/content/j257n751gg392850/&lt;/a&gt;&lt;br /&gt; Luiz F. F. Belussi and Nina S. T. Hirata&lt;/td&gt;
&lt;td align=&quot;right&quot;&gt;Journal of Mathematical Imaging and Vision&lt;br /&gt; 2012, DOI: 10.1007/s10851-012-0355-x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan=&quot;2&quot; align=&quot;justify&quot;&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;div id=&quot;content-core&quot;&gt;
&lt;div&gt; &lt;/div&gt;
&lt;div&gt;Sample images for the method described in the article above.&lt;/div&gt;
&lt;div&gt; &lt;/div&gt;
&lt;div id=&quot;parent-fieldname-text&quot;&gt;&lt;a href=&quot;http://www.vision.ime.usp.br/%7Enina/QRcodeDetection/&quot;&gt;SIBGRAPI: Supplementary material (resulting image samples)&lt;/a&gt;
&lt;p&gt;&lt;a class=&quot;external-link&quot; href=&quot;http://www.vision.ime.usp.br/%7Ebelussi/supplementary_material_jmiv/&quot;&gt;&lt;span class=&quot;external-link&quot;&gt;JMIV: Supplementary material (resulting image samples)&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;</summary>
		<content type="html">&lt;div class=&quot;feed-description&quot;&gt;&lt;div class=&quot;documentDescription&quot;&gt;
&lt;table style=&quot;width: 820px;&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;http://www.springerlink.com/content/j257n751gg392850/&quot;&gt;http://www.springerlink.com/content/j257n751gg392850/&lt;/a&gt;&lt;br /&gt; Luiz F. F. Belussi and Nina S. T. Hirata&lt;/td&gt;
&lt;td align=&quot;right&quot;&gt;Journal of Mathematical Imaging and Vision&lt;br /&gt; 2012, DOI: 10.1007/s10851-012-0355-x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan=&quot;2&quot; align=&quot;justify&quot;&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;div id=&quot;content-core&quot;&gt;
&lt;div&gt; &lt;/div&gt;
&lt;div&gt;Sample images for the method described in the article above.&lt;/div&gt;
&lt;div&gt; &lt;/div&gt;
&lt;div id=&quot;parent-fieldname-text&quot;&gt;&lt;a href=&quot;http://www.vision.ime.usp.br/%7Enina/QRcodeDetection/&quot;&gt;SIBGRAPI: Supplementary material (resulting image samples)&lt;/a&gt;
&lt;p&gt;&lt;a class=&quot;external-link&quot; href=&quot;http://www.vision.ime.usp.br/%7Ebelussi/supplementary_material_jmiv/&quot;&gt;&lt;span class=&quot;external-link&quot;&gt;JMIV: Supplementary material (resulting image samples)&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;</content>
		<category term="Demos" />
	</entry>
</feed>
