About me
This is a page not in the main menu.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
Photo taken after the last tutorial of the course ESE 415 Optimization.
Published:
Talk at ICASSP 2019 in Brighton, UK.
Published in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
Stochastic gradient descent (SGD) is one of the most widely used optimization methods for parallel and distributed processing of large datasets. One of the key limitations of distributed SGD is the need to regularly communicate the gradients between different computation nodes. To reduce this communication bottleneck, recent work has considered a one-bit variant of SGD, where only the sign of each gradient element is used in optimization. In this paper, we extend this idea by proposing a stochastic variant of the proximal-gradient method that also uses one bit per update element. We prove the theoretical convergence of the method for non-convex optimization under a set of explicit assumptions. Our results indicate that the compressed method can match the convergence rate of the uncompressed one, making the proposed method potentially appealing for distributed processing of large datasets.
Recommended citation: Xu, X., & Kamilov, U. S. (2019, April). signProx: One-bit proximal algorithm for nonconvex stochastic optimization. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 7800-7804). IEEE. https://ieeexplore.ieee.org/abstract/document/8682059
Published in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
In the past decade, sparsity-driven regularization has led to significant improvements in image reconstruction. Traditional regularizers, such as total variation (TV), rely on analytical models of sparsity. However, increasingly the field is moving towards trainable models inspired by deep learning. Deep image prior (DIP) is a recent regularization framework that uses a convolutional neural network (CNN) architecture without data-driven training. This paper extends the DIP framework by combining it with the traditional TV regularization. We show that the inclusion of TV leads to considerable performance gains when tested on several traditional restoration tasks such as image denoising and deblurring.
Recommended citation: Liu, J., Sun, Y., Xu, X., & Kamilov, U. S. (2019, April). Image restoration using total variation regularized deep image prior. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 7715-7719). IEEE. https://ieeexplore.ieee.org/abstract/document/8682856
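A minimal PyTorch sketch of the combined objective described above: an untrained CNN (the deep image prior) maps a fixed random input z to an image, and the fit to the degraded observation y is penalized together with an anisotropic total-variation term. The network architecture, the denoising-type forward model, and the weight lam are illustrative placeholders, not the paper's exact setup.

import torch

def tv_loss(img):
    # Anisotropic total variation of an (N, C, H, W) image tensor.
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().sum()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().sum()
    return dh + dw

def dip_tv_objective(net, z, y, lam=1e-4):
    # Data-fidelity term plus TV regularization on the network output;
    # minimizing this over the network weights fits DIP with a TV prior.
    x = net(z)
    return torch.nn.functional.mse_loss(x, y) + lam * tv_loss(x)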
Published in IGARSS 2020 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2020
In this paper, we develop a robust three dimensional tomographic imaging framework to estimate the ionospheric electron density using ground-based total electron content (TEC) measurements from GPS receivers. In order to increase the sampling rate of the domain, we incorporate into the tomographic measurements the TEC readings observed from low-angle satellites that fall outside of the target ionospheric domain. We discount the proportion of the TEC measurements that originate outside of the target domain using the simulation-based NeQuick2 model as reference. We also employ a diffusion kernel regularization function to robustify the reconstruction against errors in the NeQuick2 model. Finally, we demonstrate through simulations that our framework delivers superior reconstruction of the ionospheric electron density compared to existing schemes. We also demonstrate the applicability of our approach on real TEC measurements.
Recommended citation: Xu X, Dhifallah O, Mansour H, Boufounos PT, Orlik PV. Robust 3D Tomographic Imaging of the Ionospheric Electron Density. https://merl.com/publications/docs/TR2020-113.pdf
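A minimal NumPy sketch of the generic linear-tomography step underlying the abstract: each TEC measurement is modeled as a weighted line integral through voxels of electron density (a row of A), and the density is recovered by regularized least squares. The regularization matrix R stands in for the diffusion-kernel regularizer, and the NeQuick2-based discounting of out-of-domain TEC contributions is not reproduced here.

import numpy as np

def reconstruct_density(A, tec, R, lam):
    # Solve the regularized normal equations (A^T A + lam * R) x = A^T tec
    # for the voxelized electron density x.
    lhs = A.T @ A + lam * R
    rhs = A.T @ tec
    return np.linalg.solve(lhs, rhs)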
Published in IEEE Signal Processing Letters, 2020
Plug-and-play priors (PnP) is a methodology for regularized image reconstruction that specifies the prior through an image denoiser. While PnP algorithms are well understood for denoisers performing maximum a posteriori probability (MAP) estimation, they have not been analyzed for minimum mean squared error (MMSE) denoisers. This letter addresses this gap by establishing the first theoretical convergence result for the iterative shrinkage/thresholding algorithm (ISTA) variant of PnP for MMSE denoisers. We show that the iterates produced by PnP-ISTA with an MMSE denoiser converge to a stationary point of some global cost function. We validate our analysis on sparse signal recovery in compressive sensing by comparing two types of denoisers, namely the exact MMSE denoiser and the approximate MMSE denoiser obtained by training a deep neural net.
Recommended citation: X. Xu, Y. Sun, J. Liu, B. Wohlberg and U. S. Kamilov, "Provable Convergence of Plug-and-Play Priors With MMSE Denoisers," in IEEE Signal Processing Letters, vol. 27, pp. 1280-1284, 2020, doi: 10.1109/LSP.2020.3006390. https://ieeexplore.ieee.org/document/9130860
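A minimal NumPy sketch of the PnP-ISTA iteration analyzed in the letter: a gradient step on the quadratic data-fidelity term followed by an application of the denoiser in place of the proximal operator. The denoiser is a placeholder callable (the paper considers MMSE denoisers, exact or approximated by a trained network); the step size gamma, initialization, and iteration count below are illustrative.

import numpy as np

def pnp_ista(A, y, denoiser, gamma, num_iters=100):
    # x_{k+1} = D(x_k - gamma * A^T (A x_k - y))
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)         # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - gamma * grad)   # denoiser replaces the proximal step
    return x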
Published in 54th Asilomar Conference on Signals, Systems and Computers, 2020
Plug-and-play priors (PnP) is an image reconstruction framework that uses an image denoiser as an imaging prior. Unlike traditional regularized inversion, PnP does not require the prior to be expressible in the form of a regularization function. This flexibility enables PnP algorithms to exploit the most effective image denoisers, leading to their state-of-the-art performance in various imaging tasks. However, many powerful denoisers, such as the ones based on convolutional neural networks (CNNs), do not have tunable parameters that would allow controlling their influence within PnP. To address this issue, in this paper, we introduce a scaling parameter that adjusts the magnitude of the denoiser input and output. We theoretically justify the denoiser scaling from the perspectives of proximal optimization, statistical estimation, and consensus equilibrium. Finally, we provide numerical experiments demonstrating the ability of denoiser scaling to systematically improve the performance of PnP for denoising CNN priors that do not have explicitly tunable parameters.
Recommended citation: X. Xu, J. Liu, Y. Sun, B. Wohlberg, and U. S. Kamilov, “Boosting the Performance of Plug-and-Play Priors via Denoiser Scaling,” 2020, arXiv:2002.11546. https://arxiv.org/abs/2002.11546
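A minimal sketch of the denoiser-scaling idea described above: wrapping a fixed denoiser so that a scalar mu rescales its input and output, giving PnP a tunable parameter even when the denoiser itself has none. The specific form D_mu(x) = D(mu * x) / mu is an assumption consistent with the abstract's description, not necessarily the paper's exact definition.

def scaled_denoiser(denoiser, mu):
    # mu controls the effective strength of a fixed CNN denoiser inside a
    # plug-and-play iteration by scaling its input and output.
    return lambda x: denoiser(mu * x) / mu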
Published:
This poster was presented in the session “Computational Imaging” at ICASSP 2019.
Published:
This talk was given in the session “Recent Advances in Signal Processing for Large-Scale Computational Imaging” at ICASSP 2019.
Published:
This talk was given in the session “3D Terrain Mapping / Tomographic Imaging of Forest and Ionosphere” at IGARSS 2020.
Published:
This talk was given in the session “H6-2: Learning from Light: Where Computer Vision and Machine Learning Meets Optics and Imaging” at the 54th Asilomar Conference on Signals, Systems and Computers.
Undergraduate & Graduate course, Washington University in St. Louis, Electrical & Systems Engineering, 2019
I was the head TA for the course “ESE 415 Optimization,” taught by Prof. Kamilov. I served as the head assistant instructor and guest lecturer for the courses “Optimization” and “Large-Scale Optimization for Data Science” and received high evaluations from students.
Undergraduate & Graduate course, Washington University in St. Louis, Electrical & Systems Engineering, Computer Science & Engineering, 2020
I was the TA for the course “CSE 534A/ESE 513 Large-Scale Optimization,” taught by Prof. Kamilov. I answered questions online and helped with the organization of the course.