[{"authors":null,"categories":null,"content":"I am a AI Scientist at Paige AI. I did my Ph.D. with Jennifer Dy, Dana Brooks, and Jan-Willem van de Meent at Northeastern University. My main research interests are machine learning with emphasis on probabilistic programming, deep neural networks, and their applications in biomedical image processing. I am one of the developers of Probabilistic Torch, a library for deep generative models that extends PyTorch. I am also one of the maintainers of the Pytorch distributions module.\n","date":1610323200,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1610323200,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"","publishdate":"0001-01-01T00:00:00Z","relpermalink":"","section":"authors","summary":"I am a AI Scientist at Paige AI. I did my Ph.D. with Jennifer Dy, Dana Brooks, and Jan-Willem van de Meent at Northeastern University. My main research interests are machine learning with emphasis on probabilistic programming, deep neural networks, and their applications in biomedical image processing.","tags":null,"title":"Alican Bozkurt","type":"authors"},{"authors":["Alican Bozkurt","Babak Esmaeili","Jean-Baptiste Tristan","Jennifer Dy","Dana H. Brooks","Jan-Willem van de Meent"],"categories":null,"content":"","date":1610323200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1610323200,"objectID":"624802e79987c70c16a552e6a7931e41","permalink":"https://alicanb.github.io/publication/aistats-ratereg/","publishdate":"2021-01-11T00:00:00Z","relpermalink":"/publication/aistats-ratereg/","section":"publication","summary":"Variational autoencoders (VAEs) optimize an objective that comprises a reconstruction loss (the distortion) and a KL term (the rate). The rate is an upper bound on the mutual information, which is often interpreted as a regularizer that controls the degree of compression. We here examine whether inclusion of the rate term also improves generalization. We perform rate-distortion analyses in which we control the strength of the rate term, the network capacity, and the difficulty of the generalization problem. Lowering the strength of the rate term paradoxically improves generalization in most settings, and reducing the mutual information typically leads to underfitting. Moreover, we show that generalization performance continues to improve even after the mutual information saturates, indicating that the gap on the bound (i.e. the KL divergence relative to the inference marginal) affects generalization. This suggests that the standard spherical Gaussian prior is not an inductive bias that typically improves generalization, prompting further work to understand what choices of priors improve generalization in VAEs.","tags":["VAE"],"title":"Rate-Regularization and Generalization in VAEs","type":"publication"},{"authors":["Alican Bozkurt"],"categories":null,"content":" Click the Cite button above to demo the feature to enable visitors to import publication metadata into their reference management software. Create your slides in Markdown - click the Slides button to check out the example. 
","date":1586736000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1586736000,"objectID":"422b01dbe09bdfe1673aa0f5029d9baa","permalink":"https://alicanb.github.io/publication/phd_thesis/","publishdate":"2020-04-13T00:00:00Z","relpermalink":"/publication/phd_thesis/","section":"publication","summary":"The performance of any task depends on the representation of the data. A good representation should capture the factors of variation relevant to the task at hand while discarding the nuisance variables. Since this is task-specific, the common way to build representations had been to hand-engineer them using domain knowledge. Since the advent of deep learning, this paradigm has shifted in favor of learning the representations in tandem with the task. Whereas there has been remarkable progress in representation learning with deep networks for natural images, medical images do not benefit from this paradigm as much as natural images. This is due to a number of factors particular to this domain, including relative data scarcity, class imbalance (e.g. many more “normal” images than abnormal or containing disease), and objects or patterns of interest occurring at multiple scales and without clear boundaries. Another challenge for machine learning for medical images is that the tolerance for error is often lower compared to tasks involving natural images. As a result, representation learning for medical images still requires solutions that are tailored to the data and task at hand.\n\nIn this thesis, we develop and study methods for learning representations from complex medical data that enable high performance in several downstream tasks, e.g., sequence classification and semantic segmentation. We then look at a more abstract deep learning methodology, generalization in Variational Autoencoders (VAEs), motivated by the limitations of current approaches, to improve our understanding of the relationship between available training data and representation of the more general population of images from which the training data were sampled.\n\nThe medical imaging modality we look at is Reflectance Confocal Microscopy (RCM), which is an effective, non-invasive pre-screening tool for skin cancer diagnosis. However, RCM images require extensive training and experience to assess accurately. There are few quantitative tools available to standardize image acquisition and analysis, and the available ones are not interpretable. In the first part of this work, we use an RNN with attention on CNN features to delineate in an interpretable manner the skin strata in vertically-oriented stacks of transverse RCM image slices. We introduce a new attention mechanism called Toeplitz attention, which constrains the attention map to have a Toeplitz structure. Testing our model on an expert-labeled dataset of 504 RCM stacks, we achieve 88.07% image-wise classification accuracy, which is the current state of the art.\n\nIn the second part of this work, we develop two automated semantic segmentation methods called MU-Net and MED-Net that provide pixel-wise labeling of RCM images into classes of cell structure patterns. 
The novelty in our approach is the modeling of textural patterns at multiple resolutions, mimicking the traditional procedure for examining pathology images, which routinely starts with low magnification (low resolution, large field of view) followed by closer inspection of suspicious areas with higher magnification (higher resolution, smaller fields of view). We trained and tested our model on non-overlapping partitions of 117 RCM mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US and two in Italy. With patient-wise cross-validation, we achieved pixel-wise mean sensitivity and specificity of 70% and 95%, respectively, with a 0.71 Dice coefficient over six classes. In a second scenario, we partitioned the data by clinic of origin and tested the generalizability of the model across clinics. In this setting, we achieved pixel-wise mean sensitivity and specificity of 74% and 95%, respectively, with a 0.75 Dice coefficient. We compared MU-Net and MED-Net against state-of-the-art semantic segmentation models and achieved better quantitative segmentation performance than previous approaches. Our results also suggest that, due to their nested multiscale architecture, our models annotated RCM mosaics more coherently, avoiding unrealistically fragmented annotations.\n\nLast, we examine the generalization of the latent representations in VAEs. The VAE objective combines a reconstruction loss (the distortion) and a KL divergence term (the rate) that is often interpreted as a regularizer. Our work re-examines this view. We perform rate-distortion analyses in which we control the strength of the KL term, the network capacity, and the difficulty of the generalization problem. Lowering the coefficient of the KL term lowers generalization in low capacity models, but paradoxically improves generalization in higher capacity models. Moreover, in easier generalization tasks (where the training set examples closely approximate test set examples), lowering the coefficient even improves generalization in low capacity models. These results show that the KL term does not improve generalization in terms of reconstruction loss. This suggests future work to investigate what inductive biases can aid generalization in this class of models.","tags":["RCM","VAE"],"title":"Deep Representation Learning for Complex Medical Images","type":"publication"},{"authors":["Kivanc Kose","Alican Bozkurt","Christi Alessi-Fox","Melissa Gill","Caterina Longo","Giovanni Pellacani","Jennifer Dy","Dana H. Brooks","Milind Rajadhyaksha"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"95a0416a73c45b81ab616a3b02cfef0d","permalink":"https://alicanb.github.io/publication/media-mednet/","publishdate":"2020-01-01T00:00:00Z","relpermalink":"/publication/media-mednet/","section":"publication","summary":"In-vivo optical microscopy is advancing into routine clinical practice for non-invasively guiding diagnosis and treatment of cancer and other diseases, and thus beginning to reduce the need for traditional biopsy. However, reading and analysis of the optical microscopic images are generally still qualitative, relying mainly on visual examination. Here we present an automated semantic segmentation method called 'Multiscale Encoder-Decoder Network (MED-Net)' that provides pixel-wise labeling into classes of patterns in a quantitative manner. 
The novelty in our approach is the modeling of textural patterns at multiple scales. This mimics the procedure for examining pathology images, which routinely starts with low magnification (low resolution, large field of view) followed by closer inspection of suspicious areas with higher magnification (higher resolution, smaller fields of view). We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US and two in Italy. With patient-wise cross-validation, we achieved pixel-wise mean sensitivity and specificity of 70±11% and 95±2%, respectively, with a 0.71±0.09 Dice coefficient over six classes. In a second scenario, we partitioned the data clinic-wise and tested the generalizability of the model over multiple clinics. In this setting, we achieved pixel-wise mean sensitivity and specificity of 74% and 95%, respectively, with a 0.75 Dice coefficient. We compared MED-Net against state-of-the-art semantic segmentation models and achieved better quantitative segmentation performance. Our results also suggest that, due to its nested multiscale architecture, the MED-Net model annotated RCM mosaics more coherently, avoiding unrealistically fragmented annotations.","tags":["RCM"],"title":"Segmentation of Cellular Patterns in Confocal Images of Melanocytic Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net)","type":"publication"},{"authors":["Kivanc Kose","Alican Bozkurt","Christi Alessi-Fox","Dana H. Brooks","Jennifer Dy","Milind Rajadhyaksha","Melissa Gill"],"categories":null,"content":"","date":1576540800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1576540800,"objectID":"db65d7b841cad1d3a69506a81e5122ff","permalink":"https://alicanb.github.io/publication/jid-mednet/","publishdate":"2019-12-17T00:00:00Z","relpermalink":"/publication/jid-mednet/","section":"publication","summary":"In vivo reflectance confocal microscopy (RCM) enables clinicians to examine lesions’ morphological and cytological information in epidermal and dermal layers, while reducing the need for biopsies. As RCM is being adopted more widely, the workflow is expanding from real-time diagnosis at the bedside to include a “capture, store and forward” model with image interpretation and diagnosis occurring offsite, similar to radiology. As the patient may no longer be present at the time of image interpretation, quality assurance is key during image acquisition. Herein, we introduce a quality assurance process by means of automatically quantifying diagnostically uninformative areas within the lesional area, by using RCM and co-registered dermoscopy images together. We trained and validated a pixel-level segmentation model on 117 RCM mosaics collected by international collaborators. The model delineates diagnostically uninformative areas with 82% sensitivity and 93% specificity. We further tested the model on a separate set of 372 co-registered RCM-dermoscopic image pairs and illustrate how the results of the RCM-only model can be improved via a multi-modal (RCM + Dermoscopy) approach, which can help quantify the uninformative regions within the lesional area. 
Our data suggest that machine learning-based automatic quantification offers a feasible, objective quality control measure for RCM imaging.","tags":["RCM"],"title":"Utilizing Machine Learning for Image Quality Assessment for Reflectance Confocal Microscopy","type":"publication"},{"authors":["Kivanc Kose","Alican Bozkurt","Jennifer Dy","Dana Brooks","Milind Rajadhyaksha"],"categories":null,"content":"","date":1565222400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1565222400,"objectID":"74a0179e4e6363c2cee052e866103557","permalink":"https://alicanb.github.io/publication/stanford-fac/","publishdate":"2019-08-08T00:00:00Z","relpermalink":"/publication/stanford-fac/","section":"publication","summary":"","tags":["RCM"],"title":"Facilitating the Adoption of Reflectance Confocal Microscopy (RCM) in Clinical Cancer Care Practice with Machine Learning","type":"publication"},{"authors":["Babak Esmaeili","Hao Wu","Sarthak Jain","Alican Bozkurt","N. Siddharth","Brooks Paige","Jennifer Dy","Dana H. Brooks","Jan-Willem van de Meent"],"categories":null,"content":"","date":1554336000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1554336000,"objectID":"bfcb6303a7aa315dbf67676985622571","permalink":"https://alicanb.github.io/publication/aistats-hfvae/","publishdate":"2019-04-04T00:00:00Z","relpermalink":"/publication/aistats-hfvae/","section":"publication","summary":"Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and, as a result, are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control the relative degree of statistical independence between blocks of variables and individual variables within blocks. We derive this objective as a generalization of the evidence lower bound, which allows us to explicitly represent the trade-offs between mutual information between data and representation, KL divergence between representation and prior, and coverage of the support of the empirical data distribution. Experiments on a variety of datasets demonstrate that our objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.","tags":["VAE"],"title":"Structured Disentangled Representations","type":"publication"},{"authors":["Alican Bozkurt","Babak Esmaeili","Jennifer Dy","Dana H. Brooks","Jan-Willem van de Meent"],"categories":null,"content":"","date":1543881600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1543881600,"objectID":"ee9bd90faa18e7c1a08130e4f2627ed9","permalink":"https://alicanb.github.io/publication/cract-vae/","publishdate":"2018-12-04T00:00:00Z","relpermalink":"/publication/cract-vae/","section":"publication","summary":"An implicit goal in works on deep generative models is that such models should be able to generate novel examples that were not previously seen in the training data. In this paper, we investigate to what extent this property holds for widely employed variational autoencoder (VAE) architectures. 
VAEs maximize a lower bound on the log marginal likelihood, which implies that they will in principle overfit the training data when provided with a sufficiently expressive decoder. In the limit of an infinite capacity decoder, the optimal generative model is a uniform mixture over the training data. More generally, an optimal decoder should output a weighted average over the examples in the training data, where the magnitude of the weights is determined by the proximity in the latent space. This leads to the hypothesis that, for a sufficiently high capacity encoder and decoder, the VAE decoder will perform nearest-neighbor matching according to the coordinates in the latent space. To test this hypothesis, we investigate generalization on the MNIST dataset. We consider both generalization to new examples of previously seen classes, and generalization to the classes that were withheld from the training set. In both cases, we find that reconstructions are closely approximated by nearest neighbors for higher-dimensional parameterizations. When generalizing to unseen classes, however, lower-dimensional parameterizations offer a clear advantage.","tags":["VAE"],"title":"Can VAEs generate novel examples?","type":"publication"},{"authors":["Alican Bozkurt","Kivanc Kose","Christi Alessi-Fox","Melissa Gill","Dana H. Brooks","Jennifer G. Dy","Milind Rajadhyaksha"],"categories":null,"content":"","date":1534982400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1534982400,"objectID":"dd309728546f89943c75d9fbffb41cfb","permalink":"https://alicanb.github.io/publication/miccai-munet/","publishdate":"2018-08-23T00:00:00Z","relpermalink":"/publication/miccai-munet/","section":"publication","summary":"We describe a new multiresolution 'nested encoder-decoder' convolutional network architecture and use it to annotate morphological patterns in reflectance confocal microscopy (RCM) images of human skin for aiding cancer diagnosis. Skin cancers are the most common types of cancers, melanoma being the deadliest among them. RCM is an effective, non-invasive pre-screening tool for skin cancer diagnosis, with the required cellular resolution. However, images are complex, low-contrast, and highly variable, so that clinicians require months to years of expert-level training to be able to make accurate assessments. In this paper, we address classifying 4 key clinically important structural/textural patterns in RCM images. The occurrence and morphology of these patterns are used by clinicians for diagnosis of melanomas. The large size of RCM images, the large variance of pattern size, the large-scale range over which patterns appear, the class imbalance in collected images, and the lack of fully-labeled images all make this a challenging problem to address, even with automated machine learning tools. We designed a novel nested U-net architecture to cope with these challenges, and a selective loss function to handle partial labeling. Trained and tested on 56 melanoma-suspicious, partially labeled, 12k x 12k pixel images, our network automatically annotated diagnostic patterns with high sensitivity and specificity, providing consistent labels for unlabeled sections of the test images. Providing such annotation will aid clinicians in achieving diagnostic accuracy, and, perhaps more importantly, dramatically facilitate clinical training, thus enabling much more rapid adoption of RCM into widespread clinical use. 
In addition, our adaptation of the U-net architecture provides an intrinsically multiresolution deep network that may be useful in other challenging biomedical image analysis applications. *First two authors share first authorship.*","tags":["RCM"],"title":"A Multiresolution Convolutional Neural Network with Partial Label Training for Annotating Reflectance Confocal Microscopy Images of Skin","type":"publication"},{"authors":["Kivanc Kose","Alican Bozkurt","Christi Alessi-Fox","Melissa Gill","Dana H. Brooks","Jennifer Dy","Milind Rajadhyaksha"],"categories":null,"content":"","date":1522972800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1522972800,"objectID":"fc2021bcc1ad6693522278d6c2bebb98","permalink":"https://alicanb.github.io/publication/osa-munet/","publishdate":"2018-04-06T00:00:00Z","relpermalink":"/publication/osa-munet/","section":"publication","summary":"Morphological tissue patterns in RCM images are critical in the diagnosis of melanocytic lesions. We present a multiresolution deep learning framework that can automatically annotate RCM images for these diagnostic patterns with high sensitivity and specificity.","tags":["RCM"],"title":"A Multiresolution Deep Learning Framework for Automated Annotation of Reflectance Confocal Microscopy Images","type":"publication"},{"authors":["Alican Bozkurt","Kivanc Kose","Jaume Coll-Font","Christi Alessi-Fox","Dana H. Brooks","Jennifer Dy","Milind Rajadhyaksha"],"categories":null,"content":"","date":1500595200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1500595200,"objectID":"d046a88f21790edc298915aabea5fbd5","permalink":"https://alicanb.github.io/publication/ml4h-dej/","publishdate":"2017-07-21T00:00:00Z","relpermalink":"/publication/ml4h-dej/","section":"publication","summary":"Reflectance confocal microscopy (RCM) is an effective, non-invasive pre-screening tool for skin cancer diagnosis, but it requires extensive training and experience to assess accurately. There are few quantitative tools available to standardize image acquisition and analysis, and the ones that are available are not interpretable. In this study, we use a recurrent neural network with attention on convolutional network features. We apply it to delineate skin strata in vertically-oriented stacks of transverse RCM image slices in an interpretable manner. We introduce a new attention mechanism called Toeplitz attention, which constrains the attention map to have a Toeplitz structure. Testing our model on an expert-labeled dataset of 504 RCM stacks, we achieve 88.17% image-wise classification accuracy, which is the current state of the art.","tags":["RCM"],"title":"Delineation of Skin Strata in Reflectance Confocal Microscopy Images using Recurrent Convolutional Networks with Toeplitz Attention","type":"publication"},{"authors":["Alican Bozkurt","Trevor Gale","Kivanc Kose","Christi Alessi-Fox","Dana H. Brooks","Milind Rajadhyaksha","Jennifer Dy"],"categories":null,"content":"","date":1500595200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1500595200,"objectID":"39568413c55b2c8b899fbca3facfebf5","permalink":"https://alicanb.github.io/publication/cvmi-dej/","publishdate":"2017-07-21T00:00:00Z","relpermalink":"/publication/cvmi-dej/","section":"publication","summary":"Reflectance confocal microscopy (RCM) is an effective, non-invasive pre-screening tool for cancer diagnosis. However, acquiring and reading RCM images require extensive training and experience, and novice clinicians exhibit high variance in diagnostic accuracy. 
Consequently, there is a compelling need for quantitative tools to standardize image acquisition and analysis. In this study, we use deep recurrent convolutional neural networks to delineate skin strata in stacks of RCM images collected at consecutive depths. To perform diagnostic analysis, clinicians collect RCM images at 4-5 specific layers in the tissue. Our model automates this process by discriminating between RCM images of different layers. Testing our model on an expert-labeled dataset of 504 RCM stacks, we achieve 87.97% classification accuracy and a 9-fold reduction in the number of anatomically impossible errors compared to the previous state of the art.","tags":["RCM"],"title":"Delineation of Skin Strata in Reflectance Confocal Microscopy Images With Recurrent Convolutional Networks","type":"publication"},{"authors":["Kivanc Kose","Alican Bozkurt","Setareh Ariafar","Christi Alessi-Fox","Melissa Gill","Jennifer Dy","Dana H. Brooks","Milind Rajadhyaksha"],"categories":null,"content":"","date":1485561600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1485561600,"objectID":"56c8336f464525141faf7e52d0ced160","permalink":"https://alicanb.github.io/publication/spie-mosaic/","publishdate":"2017-01-28T00:00:00Z","relpermalink":"/publication/spie-mosaic/","section":"publication","summary":"In this study, we present a deep learning-based classification algorithm for discriminating morphological patterns that appear in RCM mosaics of melanocytic lesions collected at the dermal-epidermal junction (DEJ). These patterns are classified into 6 distinct types in the literature: background, meshwork, ring, clod, mixed, and aspecific. Clinicians typically identify these morphological patterns by examination of their textural appearance at 10X magnification. To mimic this process, we divided mosaics into smaller regions, which we call tiles, and classified each tile in a deep learning framework. We used previously acquired DEJ mosaics of lesions deemed clinically suspicious, from 20 different patients, which were then labeled according to those 6 types by 2 expert users. We tried three different approaches for classification, all starting with a publicly available convolutional neural network (CNN) trained on natural images, consisting of a series of convolutional layers followed by a series of fully connected layers: (1) We fine-tuned this network using training data from the dataset. (2) Instead, we added an additional fully connected layer before the output layer and then re-trained only the last two layers. (3) We used only the CNN convolutional layers as a feature extractor, encoded the features using a bag-of-words model, and trained a support vector machine (SVM) classifier. Sensitivity and specificity were generally comparable across the three methods, and in the same ranges as our previous work using SURF features with SVM. Approach (3) was less computationally intensive to train but more sensitive to unbalanced representation of the 6 classes in the training data. However, we expect CNN performance to improve as we add more training data because both the features and the classifier are learned jointly from the data.","tags":[],"title":"Deep learning based classification of morphological patterns in reflectance confocal microscopy to guide noninvasive diagnosis of melanocytic lesions","type":"publication"},{"authors":["Alican Bozkurt","Kivanc Kose","Christi Alessi-Fox","Jennifer Dy","Dana H. 
Brooks","Milind Rajadhyaksha"],"categories":null,"content":"","date":1470960000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1470960000,"objectID":"ad545db59e2c90926f90f7e61d8c2f9a","permalink":"https://alicanb.github.io/publication/srt-sc/","publishdate":"2016-08-12T00:00:00Z","relpermalink":"/publication/srt-sc/","section":"publication","summary":"Measuring the thickness of the stratum corneum (SC) in vivo is often required in pharmacological, dermatological, and cosmetological studies. Reflectance confocal microscopy (RCM) offers a non-invasive imaging-based approach. However, RCM-based measurements currently rely on purely visual analysis of images, which is time-consuming and suffers from inter-user subjectivity. We developed an unsupervised segmentation algorithm that can automatically delineate the SC layer in stacks of RCM images of human skin. We represent the unique textural appearance of SC layer using complex wavelet transform and distinguish it from deeper granular layers of skin using spectral clustering. Moreover, through localized processing in a matrix of small areas (called ‘tiles’), we obtain lateral variation of SC thickness over the entire field of view. On a set of 15 RCM stacks of normal human skin, our method estimated SC thickness with a mean error of 5.4 ± 5.1 μm compared to the ‘ground truth’ segmentation obtained from a clinical expert. Our algorithm provides a non-invasive RCM imaging-based solution which is automated, rapid, objective, and repeatable.","tags":["RCM"],"title":"Unsupervised delineation of stratum corneum using reflectance confocal microscopy and spectral clustering","type":"publication"},{"authors":["J.P. Campbell","E. Ataer-Cansizoglu","V. Bolon-Canedo","Alican Bozkurt","D. Erdogmus","J. Kalpathy-Cramer","S.N. Patel","J.D. Reynolds","and others"],"categories":null,"content":"","date":1464739200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1464739200,"objectID":"c0e8686f5a0c2c28f90b6d3c7b63bda5","permalink":"https://alicanb.github.io/publication/opthalmology-rop/","publishdate":"2016-06-01T00:00:00Z","relpermalink":"/publication/opthalmology-rop/","section":"publication","summary":"**Importance**: Published definitions of plus disease in retinopathy of prematurity (ROP) reference arterial tortuosity and venous dilation within the posterior pole based on a standard published photograph. One possible explanation for limited interexpert reliability for a diagnosis of plus disease is that experts deviate from the published definitions.\n\n**Objective**: To identify vascular features used by experts for diagnosis of plus disease through quantitative image analysis. *Design, Setting, and Participants: A computer-based image analysis system (Imaging and Informatics in ROP [i-ROP]) was developed using a set of 77 digital fundus images, and the system was designed to classify images compared with a reference standard diagnosis (RSD). System performance was analyzed as a function of the field of view (circular crops with a radius of 1-6 disc diameters) and vessel subtype (arteries only, veins only, or all vessels). Routine ROP screening was conducted from June 29, 2011, to October 14, 2014, in neonatal intensive care units at 8 academic institutions, with a subset of 73 images independently classified by 11 ROP experts for validation. 
The RSD was compared with the majority diagnosis of experts.\n\n**Main Outcomes and Measures**: The primary outcome measure was the percentage of accuracy of the i-ROP system classification of plus disease, with the RSD as a function of the field of view and vessel type. Secondary outcome measures included the accuracy of the 11 experts compared with the RSD.\n\n**Results**: Accuracy of plus disease diagnosis by the i-ROP computer-based system was highest (95%; 95% CI, 94%-95%) when it incorporated vascular tortuosity from both arteries and veins and with the widest field of view (6–disc diameter radius). Accuracy was 90% or less when using only arterial tortuosity and 85% or less using a 2– to 3–disc diameter view similar to the standard published photograph. Diagnostic accuracy of the i-ROP system (95%) was comparable to that of 11 expert physicians (mean 87%, range 79%-99%).\n\n**Conclusions and Relevance**: Experts in ROP appear to consider findings from beyond the posterior retina when diagnosing plus disease and consider tortuosity of both arteries and veins, in contrast with published definitions. It is feasible for a computer-based image analysis system to perform comparably with ROP experts, using manually segmented images.","tags":["ROP"],"title":"Expert diagnosis of plus disease in retinopathy of prematurity from computer-based image analysis","type":"publication"},{"authors":["E. Ataer-Cansizoglu","V. Bolon-Canedo","J.P. Campbell","Alican Bozkurt","D. Erdogmus","J. Kalpathy-Cramer","S.N. Patel","K. Jonas","R.V.P. Chan","S. Ostmo","M.F. Chiang","on behalf of the i-ROP Research Consortium"],"categories":null,"content":"","date":1446336000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1446336000,"objectID":"a7258cf6342c85a0d9d489ff1699ca9a","permalink":"https://alicanb.github.io/publication/tvst-rop/","publishdate":"2015-11-01T00:00:00Z","relpermalink":"/publication/tvst-rop/","section":"publication","summary":"**Purpose**: We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis.\n\n**Methods**: A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the “i-ROP” system.\n\n**Results**: Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. 
This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%).\n\n**Conclusions**: This comprehensive analysis of computer-based plus disease diagnosis suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination.\n\n**Translational Relevance**: Computer-based image analysis, using objective and quantitative retinal vascular features, has the potential to complement clinical ROP diagnosis by ophthalmologists.","tags":["ROP"],"title":"Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the “i-ROP” System and Image Features Associated With Expert Diagnosis","type":"publication"},{"authors":["Alican Bozkurt","Pinar Duygulu","A. Enis Cetin"],"categories":null,"content":"","date":1437609600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1437609600,"objectID":"69c11c9eabb241abf55f9c4b2e81abe2","permalink":"https://alicanb.github.io/publication/sivp-fonts/","publishdate":"2015-07-23T00:00:00Z","relpermalink":"/publication/sivp-fonts/","section":"publication","summary":"Recognizing fonts has become an important task in document analysis, due to the increasing number of available digital documents in different fonts and emphases. A generic font recognition system independent of language, script and content is desirable for processing various types of documents. At the same time, categorizing calligraphy styles in handwritten manuscripts is important for paleographic analysis, but has not been studied sufficiently in the literature. We address the font recognition problem as analysis and categorization of textures. We extract features using the complex wavelet transform and use support vector machines for classification. Extensive experimental evaluations on different datasets in four languages and comparisons with state-of-the-art studies show that our proposed method achieves higher recognition accuracy while being computationally simpler. Furthermore, on a new dataset generated from Ottoman manuscripts, we show that the proposed method can also be used for categorizing Ottoman calligraphy with high accuracy.","tags":[],"title":"Classifying fonts and calligraphy styles using complex wavelet transform","type":"publication"},{"authors":["Alican Bozkurt","Alexander Suhre","A. Enis Cetin"],"categories":null,"content":"","date":1407369600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1407369600,"objectID":"18f962f94d8bc563ecec330ab6e0d5e6","permalink":"https://alicanb.github.io/publication/sivp-fl/","publishdate":"2014-08-07T00:00:00Z","relpermalink":"/publication/sivp-fl/","section":"publication","summary":"Follicular lymphoma (FL) is a group of malignancies of lymphocyte origin that arise from lymph nodes, spleen, and bone marrow in the lymphatic system. It is the second most common non-Hodgkin lymphoma. Characteristic of FL is the presence of follicle center B cells consisting of centrocytes and centroblasts. Typically, FL images are graded by an expert manually counting the centroblasts in an image. This is time-consuming. In this paper, we present a novel multi-scale directional filtering scheme and utilize it to classify FL images into different grades. Instead of counting the centroblasts individually, we classify the texture formed by centroblasts. 
We apply our multi-scale directional filtering scheme at two scales and along eight orientations, and use the mean and the standard deviation of each filter output as feature parameters. For classification, we use support vector machines with the radial basis function kernel. We map the features into two dimensions using linear discriminant analysis prior to classification. Experimental results are presented.","tags":[],"title":"Multi-scale directional-filtering-based method for follicular lymphoma grading","type":"publication"},{"authors":["Mohamed Tofighi","Alican Bozkurt","Kivanc Kose","A. Enis Cetin"],"categories":null,"content":"","date":1402531200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1402531200,"objectID":"6adc4693e21ecfb8d44bf828c34388fc","permalink":"https://alicanb.github.io/publication/siu-pocs-deconvolution/","publishdate":"2014-06-12T00:00:00Z","relpermalink":"/publication/siu-pocs-deconvolution/","section":"publication","summary":"A new deconvolution algorithm based on making orthogonal projections onto the epigraph set of a convex cost function is presented. In this algorithm, the dimension of the minimization problem is lifted by one and sets corresponding to the cost function and observations are defined. If the utilized cost function is convex in $R^N$, the corresponding epigraph set is also convex in $R^{N+1}$. The deconvolution algorithm starts with an arbitrary initial estimate in $R^{N+1}$. At each iteration cycle of the algorithm, deconvolution projections are first performed onto the hyperplanes representing observations, and then an orthogonal projection is performed onto the epigraph of the cost function. The method provides globally optimal solutions for total variation, $\\ell\\_1$, $\\ell\\_2$, and entropic cost functions.","tags":[],"title":"Deconvolution using projections onto the epigraph set of a convex cost function","type":"publication"},{"authors":["A. Enis Cetin","Alican Bozkurt","Osman Gunay","Y. Hakan Habiboglu","Kivanc Kose","Ibrahim Onaran","Mohammad Tofighi","Rasim Akin Sevimli"],"categories":null,"content":"","date":1392249600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1392249600,"objectID":"bba254ba1eb9795b9516fa305a3b5dcb","permalink":"https://alicanb.github.io/publication/globalsip-pocs/","publishdate":"2014-02-13T00:00:00Z","relpermalink":"/publication/globalsip-pocs/","section":"publication","summary":"A new optimization technique based on the projections onto convex sets (POCS) framework for solving convex and some non-convex optimization problems is presented. The dimension of the minimization problem is lifted by one and sets corresponding to the cost function are defined. If the cost function is a convex function in $R^N$, the corresponding set, which is the epigraph of the cost function, is also a convex set in $R^{N+1}$. The iterative optimization approach starts with an arbitrary initial estimate in $R^{N+1}$, and an orthogonal projection is performed onto one of the sets in a sequential manner at each step of the optimization problem. The method provides globally optimal solutions for total-variation, filtered variation, $\\ell\\_1$, and entropic cost functions. It is also experimentally observed that cost functions based on $\\ell\\_p$, $p \\leq 1$, may be handled by using the supporting hyperplane concept. 
The new POCS-based method can be used in image deblurring, restoration, and compressive sensing problems.","tags":[],"title":"Projections onto convex sets (POCS) based optimization by lifting","type":"publication"},{"authors":null,"categories":null,"content":"","date":-62135596800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":-62135596800,"objectID":"f26b5133c34eec1aa0a09390a36c2ade","permalink":"https://alicanb.github.io/admin/config.yml","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/admin/config.yml","section":"","summary":"","tags":null,"title":"","type":"wowchemycms"}]