- Date & Time: Thursday, March 8, 2012; 9:30 AM
Speaker: Prof. Masayuki Inaba, Director of the JSK Robotics Lab
Department of Creative Informatics
Department of Mechano-Informatics
Graduate School of Information Science and Technology
The University of Tokyo
Abstract - This talk introduces the history and ongoing activities of research and development in the JSK Robotics Lab, The University of Tokyo, including hand-eye coordination in rope handling, correlation-based tracking vision, vision-based robotics, the wireless remote-brained approach, whole-body behaviors on humanoids, tactile deformable devices for a robot sensor suit, musculoskeletal spined humanoids, power systems for human-level speed and torque performance, learning and assistive activities on HRP2 (Japanese Humanoid Robot Project Platform) and PR2 (Willow Garage's Personal Robot platform for the open-source Robot Operating System: ROS), a common software architecture across all JSK robots, and their mother environment for inherited research and development in JSK.
-
- Date & Time: Wednesday, February 22, 2012; 11:00 AM
Speaker: Dr. Charles Cadieu, McGovern Institute for Brain Research, MIT
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
Abstract - The human visual system processes complex patterns of light into a rich visual representation where the objects and motions of our world are made explicit. This remarkable feat is performed through a hierarchically arranged series of cortical areas. Little is known about the details of the representations in the intermediate visual areas. Therefore, we ask the question: can we predict the detailed structure of the representations we might find in intermediate visual areas?
In pursuit of this question, I will present a model of intermediate-level visual representation that is based on learning invariances from movies of the natural environment and produces predictions about intermediate visual areas. The model is composed of two stages of processing: an early feature representation layer, and a second layer in which invariances are explicitly represented. Invariances are learned as the result of factoring apart the temporally stable and dynamic components embedded in the early feature representation. The structure contained in these components is made explicit in the activities of second-layer units that capture invariances in both form and motion. When trained on natural movies, the first layer produces a factorization, or separation, of image content into a temporally persistent part representing local edge structure and a dynamic part representing local motion structure. The second-layer units are split into two populations according to the factorization in the first layer. The form-selective units receive their input from the temporally persistent part (local edge structure) and after training result in a diverse set of higher-order shape features consisting of extended contours, multi-scale edges, textures, and texture boundaries. The motion-selective units receive their input from the dynamic part (local motion structure) and after training result in a representation of image translation over different spatial scales and directions, in addition to more complex deformations. These representations provide a rich description of dynamic natural images, provide testable hypotheses regarding intermediate-level representation in visual cortex, and may be useful representations for artificial visual systems.
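As a toy illustration of the form/motion factorization described above (not the speaker's actual model or training procedure), the sketch below splits the response of a complex quadrature-pair filter to a translating pattern into a slowly varying amplitude (the analogue of the temporally persistent "form" part) and a changing phase (the "motion" part). The synthetic movie and all filter parameters are assumptions.

```python
# Toy amplitude/phase factorization of a complex (quadrature) filter response.
import numpy as np

T, N = 64, 128
x = np.arange(N)
# 1-D "movie": an intensity bump translating to the right over time.
frames = np.array([np.exp(-((x - 40 - 0.5 * t) ** 2) / 50.0) for t in range(T)])

# Quadrature pair of Gabor filters (even and odd phase) = one complex filter.
k = np.arange(-15, 16)
freq, sigma = 0.3, 5.0
gabor = np.exp(-k**2 / (2 * sigma**2)) * np.exp(1j * freq * k)

# First layer: complex filter response at a fixed patch for each frame.
resp = np.array([np.sum(f[48:79] * gabor) for f in frames])

amplitude = np.abs(resp)                               # slowly varying "form" part
phase_velocity = np.diff(np.unwrap(np.angle(resp)))    # dynamic "motion" part

print("relative amplitude variation:", np.std(amplitude) / np.mean(amplitude))
print("mean phase velocity:", phase_velocity.mean())
```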
-
- Date & Time: Tuesday, February 21, 2012; 12:00 PM
Speaker: Dimitri Androutsos, Richard Rzeszutek, Ryerson University
MERL Host: Anthony Vetro
Abstract - The problem of converting monoscopic footage into stereoscopic or multi-view content is inherently difficult and ill-posed. On the surface, this does not appear to be the case, as the problem may be summed up as, "Given a single-view image or video, create one or more views as if they were taken from a different viewpoint." However, capturing a three-dimensional scene as a two-dimensional image is a lossy process, and any information regarding the distance of objects to the camera is lost. Methods exist for extracting depth information from a monoscopic view, and it is possible to obtain metrically correct depth estimates under certain conditions. But since conversion is primarily used as a post-processing stage in film production, the user requires a degree of control over the results. This, in turn, makes it ill-posed, as there is no way to know ahead of time what the user wants from the conversion. In this talk we will present the work being done at Ryerson University on user-guided 2D-to-3D conversion. In particular, we will focus on how existing image segmentation techniques may be combined to produce reasonable depth maps for conversion while still providing complete control to the user. We will also discuss how our research can be applied to both images and video without any significant alterations to our methods.
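As one hedged sketch of what user-guided depth assignment might look like (an assumed workflow built on scikit-image's random-walker segmentation, not the speakers' method), sparse user scribbles carrying depth values can be propagated into a dense depth map. The test image, scribble locations, and depth values below are all hypothetical.

```python
# Scribble-driven dense depth assignment via random-walker label propagation.
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import random_walker

image = img_as_float(data.camera())            # stand-in for a video frame

# Hypothetical user scribbles: label 1 = "near" region, label 2 = "far" region.
markers = np.zeros(image.shape, dtype=np.int32)
markers[400:410, 100:200] = 1                  # scribble on the foreground
markers[20:30, 100:400] = 2                    # scribble on the sky/background
scribble_depth = {1: 1.0, 2: 10.0}             # user-assigned depths (arbitrary units)

# Propagate the scribbles; probabilities have shape (n_labels, H, W).
prob = random_walker(image, markers, beta=90, return_full_prob=True)

# Dense depth map = expected depth under the per-pixel label probabilities.
depth = sum(scribble_depth[label + 1] * prob[label] for label in range(prob.shape[0]))
print(depth.shape, depth.min(), depth.max())
```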
-
- Date & Time: Wednesday, January 4, 2012; 12:00 PM
Speaker: Dr. Ye Wang, AgaMatrix, Inc.
Abstract - In the field of Secure Multi-party Computation, the general objective is to design protocols that allow a group of parties to securely compute functions of their collective private data, while maintaining privacy (in that no parties reveal any more information about their personal data than necessary) and ensuring correctness (in that no parties can disrupt or influence the computation beyond the effect of changing their input data). Information-theoretic approaches to this broad problem, which provide provable (unconditional) security guarantees even against adversaries with unbounded computational power, have established that general computation is possible in a variety of scenarios. However, these general solutions are not always the most efficient or finely tuned to the requirements of specific problems and applications.
In this talk, we will overview our work toward the development of efficient information theoretic approaches for secure multi-party computation applications within the common theme of secure computation and inference over a distributed data network. These applications include:
1) private information retrieval, where the objective is to privately obtain data without revealing what was selected;
2) secure statistical analysis, the problem of extracting statistics without revealing anything else about the underlying distributed data;
3) secure sampling, which is the secure distributed generation of new data with a given joint distribution; and
4) secure authentication, where the identity of a party needs to be authenticated via inference on his credentials and stored registration data.
Our contributions toward these applications include the following. We proposed a novel oblivious transfer protocol, applicable to private information retrieval, that trades off a small amount of privacy for a drastic increase in efficiency. We leveraged a dimensionality reduction that exploits functional structure to simultaneously achieve arbitrarily high accuracy and efficiency in protocols that perform secure statistical analysis of distributed databases. Toward characterizing the region of distributions that can be securely sampled from scratch, we fully characterized the two-party scenario and provided inner and outer bounds for the multi-party scenario. Toward enabling secure distributed authentication, we proposed a two-factor secure biometric authentication system that is robust against the compromise of registered biometric data, allowing for revocability and providing resistance against cross-enrollment attacks.
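As a toy illustration of the private information retrieval setting (item 1 above), and not the oblivious-transfer protocol proposed in the talk, the following sketch shows classic two-server, XOR-based PIR: each query set is individually uniformly random, so neither non-colluding server learns which record is being retrieved, yet the XOR of the two answers recovers the desired record.

```python
# Two-server XOR-based private information retrieval (toy example).
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(database, query_indices):
    """Each server XORs together the records named in its query set."""
    answer = bytes(len(database[0]))
    for i in query_indices:
        answer = xor_bytes(answer, database[i])
    return answer

# Replicated database of fixed-size records.
db = [b"rec0", b"rec1", b"rec2", b"rec3"]
target = 2  # index the client wants, hidden from both servers

# Client: a uniformly random subset for server A; the same set with the
# target index toggled for server B.  Each set alone is uniformly random.
subset_a = {i for i in range(len(db)) if secrets.randbelow(2)}
subset_b = subset_a ^ {target}

record = xor_bytes(server_answer(db, subset_a), server_answer(db, subset_b))
assert record == db[target]
print(record)
```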
-
- Date & Time: Tuesday, December 20, 2011; 12:00 PM
Speaker: Olivia Leitermann, MIT
MERL Host: Daniel N. Nikovski
Research Area: Data Analytics
Abstract - Ancillary services such as frequency regulation are required for reliable operation of the electric grid. Currently, the same traditional thermal generators that supply bulk power also perform nearly all frequency regulation. Instead, using high power energy storage resources to provide frequency regulation can allow traditional thermal generators to operate more smoothly. However, using energy storage alone for frequency regulation would require an unreasonably large energy storage capacity. Duration curves for energy capacity and instantaneous ramp rate are used to evaluate the requirements and benefits of using energy storage for a component of frequency regulation. High-pass filtering and closed-loop control are used to separate the portion of a frequency regulation control signal suitable for provision by an energy storage unit from the portion suitable for provision by traditional thermal generating resources. Not all frequency regulation signals are equally amenable to the filtering approach used here. Data from two U.S. control areas are used to demonstrate the techniques and the results are compared.
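A minimal sketch of the signal-splitting idea, with an assumed Butterworth filter, assumed sampling and cutoff parameters, and a synthetic regulation signal (not the speaker's filter design or the control-area data):

```python
# Split a frequency-regulation signal between fast storage and thermal units.
import numpy as np
from scipy import signal

fs = 1.0 / 4.0                # regulation signal sampled every 4 seconds (assumed)
t = np.arange(0, 3600, 4.0)   # one hour of data
rng = np.random.default_rng(1)
regulation = np.cumsum(rng.normal(scale=0.5, size=t.size))  # synthetic AGC-like signal (MW)

# Second-order Butterworth high-pass with an assumed 10-minute cutoff.
cutoff_hz = 1.0 / 600.0
b, a = signal.butter(2, cutoff_hz, btype="highpass", fs=fs)
to_storage = signal.lfilter(b, a, regulation)   # fast component -> energy storage
to_thermal = regulation - to_storage            # slow component -> thermal generators

# Integrate the storage component to get its state-of-charge swing (MWh).
energy = np.cumsum(to_storage) * 4.0 / 3600.0
print("required energy capacity (MWh):", energy.max() - energy.min())
print("max storage ramp (MW per step):", np.abs(np.diff(to_storage)).max())
```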
-
- Date & Time: Thursday, December 1, 2011; 11:00 AM
Speaker: Gregg Favalora, Optics for Hire (OFH)
MERL Host: Matthew Brand
Abstract - I'll give an information-rich survey presentation on "interesting and unusual" forms of autostereo display. It will assume basic knowledge of autostereo, e.g. lenticular and parallax barrier displays [unless, of course, you'd like a few minutes going over the basics.] I will discuss: spatially-multiplexed, time-multiplexed, and multi-projector systems. This includes: non-obvious depth cues, advances in parallax barrier displays, lenticulars, multi-projector / projection onto corrugated screens, scanned illumination, volumetric, and electro-holographic techniques.
-
- Date & Time: Friday, November 18, 2011; 12:00 PM
Speaker: Shreeshankar Bodas, MIT
Abstract - We look at the problem of designing "efficient" resource allocation algorithms for wireless networks. The volume of data transferred over the wireless network has been ever-growing, but the resources (time, frequency) are not growing at the same rate. We therefore need to design good resource allocation schemes to guarantee a good quality of service to the users.
In the first part of the talk, we look at the wireless access network, such as Wi-Fi. We have three objectives: high resource utilization, low user-perceived latency, and minimal computational burden on the devices. An interesting recent result by Shah et al. says that these three objectives are incompatible with each other, unless P=NP. We design a physical-layer-aware medium access algorithm that simultaneously achieves the three objectives, and thereby show that the hardness result of Shah et al. is an artifact of a simplistic view of the physical layer.
The second part of the talk focuses on designing scheduling algorithms for wireless downlink networks, such as a cellular network. Our objectives (again) are high resource utilization, low per-user delay, and a "simple" algorithm. We outline the drawbacks of the classic MaxWeight-type algorithms, and design iterative resource allocation schemes that perform well on all three fronts.
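For context, here is a minimal sketch of the classic MaxWeight rule referenced above, which serves the user maximizing queue length times instantaneous service rate; the arrival and channel models are assumptions, and the iterative schemes proposed in the talk are not reproduced.

```python
# Classic MaxWeight downlink scheduler on a toy multi-user queueing model.
import numpy as np

rng = np.random.default_rng(42)
n_users, n_slots = 4, 10_000
arrival_rate = 0.5            # packets per slot per user (assumed)
queues = np.zeros(n_users)
total_backlog = 0.0

for _ in range(n_slots):
    rates = rng.integers(0, 4, size=n_users)       # channel-dependent service rates
    served = int(np.argmax(queues * rates))        # MaxWeight rule: argmax_i q_i * r_i
    queues[served] = max(queues[served] - rates[served], 0.0)
    queues += rng.poisson(arrival_rate, size=n_users)  # new arrivals
    total_backlog += queues.sum()

print("average total queue length:", total_backlog / n_slots)
```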
-
- Date & Time: Thursday, October 20, 2011; 2:20 PM
Speaker: Prof. Mark Plumbley, Queen Mary, London
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Thursday, October 20, 2011; 3:40 PM
Speaker: Prof. Nobutaka Ono, National Institute of Informatics, Tokyo
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Thursday, October 20, 2011; 3:00 PM
Speaker: Dr. Cedric Fevotte, CNRS - Telecom ParisTech, Paris
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Thursday, September 1, 2011; 12:00 PM
Speaker: Alexander Behrens, RWTH Aachen University
MERL Host: Anthony Vetro
Abstract - Today, photodynamic diagnostics is commonly used for cancer detection in endoscopic interventions of the urinary bladder. Although the visual contrast between benign and malignant tissue is significantly enhanced using fluorescence markers, the field of view (FOV) of the endoscope becomes very limited. This impedes navigation and the re-identification of multifocal tumors for the physician. Thus, new image mosaicking algorithms and visualization methods, which provide larger FOVs in real time from free-hand bladder scans, have been developed and will be presented. Furthermore, a novel graph-based method for automatic control of seamless inspections is addressed. Going beyond image processing, a first low-cost inertial 3-D navigation system will be introduced, and a guided navigation tool for tumor re-identification and its application to virtual endoscopy will be discussed.
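For orientation, here is a generic feature-based mosaicking sketch using ORB features and a RANSAC homography, a standard pipeline assumed only for illustration; the speaker's bladder-specific algorithms are more involved, and the input file name below is hypothetical.

```python
# Generic two-view mosaicking: match features, estimate a homography, warp, blend.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:100]

    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_a.shape[:2]
    mosaic = cv2.warpPerspective(img_b, H, (2 * w, h))   # map img_b into img_a's frame
    mosaic[:h, :w] = np.maximum(mosaic[:h, :w], img_a)   # crude blend of the overlap
    return mosaic

# Synthetic test: two overlapping crops of one frame stand in for endoscope views.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input frame
left, right = frame[:, :400], frame[:, 200:600]
cv2.imwrite("mosaic.png", stitch_pair(left, right))
```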
-
- Date & Time: Wednesday, June 15, 2011; 12:00 PM
Speaker: Dr. Yue M. Lu, Harvard School of Engineering and Applied Sciences
MERL Host: Petros T. Boufounos
Abstract - Before the advent of digital image sensors, photography, for most of its history, used film to record light information. In this talk, I will present a new digital image sensor that is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity.
To analyze its performance, we formulate the binary sensing scheme as a parameter estimation problem based on quantized Poisson statistics. We show that, with a single-photon quantization threshold and large oversampling factors, the Cramer-Rao lower bound of the estimation variance approaches that of an ideal unquantized sensor, that is, as if there were no quantization in the sensor measurements. Furthermore, this theoretical performance bound is shown to be asymptotically achievable by practical image reconstruction algorithms based on maximum likelihood estimators.
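A minimal numerical check of the maximum-likelihood reconstruction under the single-photon threshold, on synthetic data with assumed values of exposure and oversampling factor: each of K binary sub-pixel measurements is 1 with probability 1 - exp(-λ/K), so the per-pixel ML estimate takes the simple closed form λ̂ = -K ln(1 - S/K), where S is the number of ones (a standard result for this Bernoulli model, stated here for illustration).

```python
# ML estimate of light intensity from one-bit (single-photon threshold) readings.
import numpy as np

rng = np.random.default_rng(7)
lam = 20.0          # true light exposure (expected photon count over the pixel)
K = 1024            # oversampling factor: binary sub-pixels per output pixel

# Photon counts per sub-pixel are Poisson(lam/K); the binary sensor reports
# whether at least one photon arrived (single-photon threshold).
photons = rng.poisson(lam / K, size=K)
bits = (photons >= 1).astype(int)

S = bits.sum()
lam_hat = -K * np.log(1.0 - S / K)   # closed-form ML estimate from the binary readings
print(f"true lambda = {lam}, estimate = {lam_hat:.2f}")
```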
Numerical results on both synthetic data and images taken by a prototype sensor verify the theoretical analysis and the effectiveness of the proposed image reconstruction algorithm. They also demonstrate the benefit of using the new binary sensor in applications involving high dynamic range imaging.
Joint work with Feng Yang, Luciano Sbaiz and Martin Vetterli.
-
- Date & Time: Tuesday, June 14, 2011; 4:00 PM
Speaker: Tadayoshi Aoyama, Nagoya University
Research Area: Computer Vision
Abstract - First, the concept of a "Multi-Locomotion Robot" that has multiple types of locomotion is introduced. The robot is designed to achieve bipedal walking, quadrupedal walking, and brachiation, mimicking the locomotion of a gorilla. It therefore achieves higher mobility by selecting the proper locomotion type according to its environment and purpose. I will show experimental videos of the motions realized so far.
Second, I focus on bipedal walking and discuss it in detail. This part proposes a 3-D biped walking algorithm based on Passive Dynamic Autonomous Control (PDAC). The robot dynamics are modeled as an autonomous system of a 3-D inverted pendulum by applying the PDAC concept, which is based on the assumption of point contact of the robot foot and virtual constraints on the robot joints. Due to this autonomy, there are two conserved quantities, named PDAC constants, that determine the velocity and direction of the biped walking. We also propose a convergence algorithm that drives the PDAC constants to arbitrary values, so that walking velocity and direction are controllable. Finally, experimental results validate the performance and the energy efficiency of the proposed algorithm.
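As a related, standard illustration (not the PDAC constants themselves), the 1-D linear inverted pendulum x'' = (g/z) x with constant pendulum height z has a conserved "orbital energy" E = 0.5 ẋ² - (g/2z) x² that, much like a conserved walking quantity, fixes the speed carried into support exchange. The sketch below checks this conservation numerically; all parameters are assumptions.

```python
# Orbital energy conservation of a 1-D linear inverted pendulum step.
import numpy as np

g, z = 9.81, 0.8          # gravity and assumed constant CoM height (m)
dt, steps = 1e-4, 5000
x, xdot = -0.15, 0.6      # CoM position relative to the stance foot, and velocity

def orbital_energy(x, xdot):
    return 0.5 * xdot**2 - (g / (2.0 * z)) * x**2

E0 = orbital_energy(x, xdot)
for _ in range(steps):     # symplectic Euler integration of x'' = (g/z) x
    xdot += (g / z) * x * dt
    x += xdot * dt

print(f"orbital energy drift over the step: {abs(orbital_energy(x, xdot) - E0):.2e}")
```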
-
- Date & Time: Friday, June 3, 2011; 11:00 AM
Speaker: Prof. Namrata Vaswani, Iowa State University
MERL Host: Petros T. Boufounos
Abstract - In this talk, I will discuss our recent work on Recursive Sparse Recovery (RecSparsRec) and show how it provides novel solutions to two very different problems in dynamic imaging. RecSparsRec refers to recursive approaches to causally recover a time sequence of signals/images from a greatly reduced number of measurements (compared to existing approaches), by utilizing their sparsity.
The motivating application for RecSparsRec is fast recursive dynamic magnetic resonance imaging (MRI) for real-time applications like MRI-guided surgery. MRI is a technique for cross-sectional imaging that acquires Fourier projections of the cross-section to be reconstructed, one at a time. Thus, the ability to accurately reconstruct using fewer measurements directly translates into reduced scan times. This, along with online (causal) and fast (recursive) reconstruction algorithms, can enable real-time imaging of fast-changing physiological phenomena, and thus make real-time MRI feasible. Cross-sectional images of the brain, heart, or other organs are known to be wavelet sparse. Our recent work was the first to observe that, in a time sequence, their sparsity pattern changes quite slowly. Using this fact, we were able to reformulate the RecSparsRec problem as one of sparse reconstruction with partially known support. We introduced a simple, but very powerful, approach called Modified-CS that achieves provably exact reconstruction (in the noise-free case) and whose error is provably stable over time (in the noisy case), using far fewer measurements than existing work. Our preliminary experiments indicate that Modified-CS needs roughly 5 times fewer measurements than existing MR scanner technology and 1.5 times fewer than existing approaches in the research literature.
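For reference, the Modified-CS program in the noise-free case can be written as follows, where T denotes the known part of the support (e.g., the previous frame's support estimate) and T^c its complement; this notation is assumed here for illustration.

```latex
% Modified-CS: only coefficients outside the known support T are penalized.
\hat{x} \;=\; \arg\min_{x} \ \| x_{T^{c}} \|_{1}
\quad \text{subject to} \quad y = A x
```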
I will also briefly discuss our ongoing work on the difficult video analysis problem of separating foreground moving objects from a background scene that is itself changing, and doing this in real time. This can be posed as a recursive robust principal components analysis (PCA) problem in the presence of correlated sparse outliers or, equivalently, as a problem of recursive sparse recovery in the presence of very large, but "low rank", noise (noise with a low-rank covariance matrix).
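For context, the standard batch robust PCA decomposition that this recursive setting generalizes can be written as the following convex program (shown only as background, not as the talk's formulation):

```latex
% Batch robust PCA: split the data matrix M into a low-rank background L
% and a sparse foreground S.
\min_{L,\,S} \ \| L \|_{*} + \lambda \| S \|_{1}
\quad \text{subject to} \quad M = L + S
```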
-
- Date & Time: Thursday, June 2, 2011; 12:00 PM
Speaker: Ramesh Annavajjala, MERL
MERL Host: Philip V. Orlik
Abstract - For orthogonal frequency-division multiplexing (OFDM) based wireless systems, a resource block (RB) in a two-dimensional time-frequency plane is defined as a data block spanned by a number of consecutive OFDM symbols over a number of consecutive subcarriers. Traditionally, RBs contain modulation symbols for data transmission and pilot symbols for channel estimation.
In this talk, I present a novel approach to RB designs for OFDM systems with multiple antennas at the transmitter and the receiver (i.e., MIMO-OFDM). The proposed approach, termed resource block embedding, does not require explicit pilot symbols to estimate the channel at the receiver, and hence reduces the channel estimation overhead significantly. I describe, in detail, the encoding and decoding algorithms for our proposed embedded resource blocks (ERB) for single-user single-antenna transmission, two transmitter antenna Alamouti code, four transmitter antenna stacked Alamouti code, and multi-stream spatial multiplexing. I also outline construction of ERBs for multi-user MIMO systems.
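For background, here is a minimal sketch of the standard two-antenna Alamouti code referenced above; the embedded resource block construction itself is the talk's contribution and is not reproduced. The symbols, channel, and noise below are synthetic.

```python
# Standard 2x1 Alamouti space-time block encoding and linear combining.
import numpy as np

rng = np.random.default_rng(3)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols

# Alamouti encoding: rows = symbol periods, columns = transmit antennas.
#   period 1: [ s1,   s2 ]
#   period 2: [-s2*,  s1*]
X = np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # flat-fading channel
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r = X @ h + noise            # received samples in the two symbol periods

# Linear combining (matched filtering with the known channel).
gain = (np.abs(h) ** 2).sum()
s1_hat = (np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])) / gain
s2_hat = (np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])) / gain
print(np.round([s1_hat, s2_hat], 3), "vs", [s1, s2])
```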
This is joint work with Phil Orlik and Jin Zhang.
-