Pedro Miraldo
- Phone: 617-621-7536
Position: Research / Technical Staff, Senior Principal Research Scientist
Education: Ph.D., University of Coimbra, 2013
Biography
Pedro Miraldo held an FCT postdoctoral research grant at the Institute for Systems & Robotics and the Department of Electrical & Computer Engineering, Instituto Superior Técnico (IST), Lisbon, from 2014 to 2018. He then joined the Division of Decision and Control Systems at KTH Royal Institute of Technology as a postdoctoral associate from 2018 to 2019. He returned to IST in 2019 as a second-stage Researcher (comparable to an Assistant Research Professor).
Recent News & Events
NEWS: MERL Papers and Workshops at CVPR 2024
Date: June 17, 2024 - June 21, 2024
Where: Seattle, WA
MERL Contacts: Petros T. Boufounos; Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Jonathan Le Roux; Suhas Lohit; Tim K. Marks; Pedro Miraldo; Jing Liu; Kuan-Chuan Peng; Pu (Perry) Wang; Ye Wang; Matthew Brand
Research Areas: Artificial Intelligence, Computational Sensing, Computer Vision, Machine Learning, Speech & Audio
Brief: MERL researchers are presenting 5 conference papers, 3 workshop papers, and are co-organizing two workshops at the CVPR 2024 conference, which will be held in Seattle, June 17-21. CVPR is one of the most prestigious and competitive international conferences in computer vision. Details of MERL contributions are provided below.
CVPR Conference Papers:
1. "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models" by H. Ni, B. Egger, S. Lohit, A. Cherian, Y. Wang, T. Koike-Akino, S. X. Huang, and T. K. Marks
This work enables a pretrained text-to-video (T2V) diffusion model to be additionally conditioned on an input image (first video frame), yielding a text+image to video (TI2V) model. Other than using the pretrained T2V model, our method requires no ("zero") training or fine-tuning. The paper uses a "repeat-and-slide" method and diffusion resampling to synthesize videos from a given starting image and text describing the video content.
Paper: https://www.merl.com/publications/TR2024-059
Project page: https://merl.com/research/highlights/TI2V-Zero
2. "Long-Tailed Anomaly Detection with Learnable Class Names" by C.-H. Ho, K.-C. Peng, and N. Vasconcelos
This work aims to identify defects across various classes without relying on hard-coded class names. We introduce the concept of long-tailed anomaly detection, addressing challenges like class imbalance and dataset variability. Our proposed method combines reconstruction and semantic modules, learning pseudo-class names and utilizing a variational autoencoder for feature synthesis to improve performance in long-tailed datasets, outperforming existing methods in experiments.
Paper: https://www.merl.com/publications/TR2024-040
3. "Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling" by X. Liu, Y-W. Tai, C-T. Tang, P. Miraldo, S. Lohit, and M. Chatterjee
This work presents a new strategy for rendering dynamic scenes from novel viewpoints. Our approach stratifies the scene into regions according to the extent of motion in each region, which is determined automatically. Regions with higher motion are given a denser spatio-temporal sampling strategy for more faithful rendering. Additionally, to the best of our knowledge, ours is the first work to enable tracking of objects in the scene from novel views, based on a user's preference indicated by a click.
Paper: https://www.merl.com/publications/TR2024-042
4. "SIRA: Scalable Inter-frame Relation and Association for Radar Perception" by R. Yataka, P. Wang, P. T. Boufounos, and R. Takahashi
To overcome limitations in radar feature extraction, such as low spatial resolution, multipath reflection, and motion blur, this paper proposes SIRA (Scalable Inter-frame Relation and Association) for scalable radar perception, with two designs: 1) extended temporal relation, which generalizes the existing temporal relation layer from two frames to multiple frames using temporally regrouped window attention for scalability; and 2) a motion consistency track, with a pseudo-tracklet generated from observational data for better object association.
Paper: https://www.merl.com/publications/TR2024-041
5. "RILA: Reflective and Imaginative Language Agent for Zero-Shot Semantic Audio-Visual Navigation" by Z. Yang, J. Liu, P. Chen, A. Cherian, T. K. Marks, J. L. Roux, and C. Gan
We leverage Large Language Models (LLMs) for zero-shot semantic audio-visual navigation. Specifically, by employing multi-modal models to process sensory data, we instruct an LLM-based planner to actively explore the environment by adaptively evaluating and dismissing inaccurate perceptual descriptions.
Paper: https://www.merl.com/publications/TR2024-043
CVPR Workshop Papers:
1. "CoLa-SDF: Controllable Latent StyleSDF for Disentangled 3D Face Generation" by R. Dey, B. Egger, V. Boddeti, Y. Wang, and T. K. Marks
This paper proposes a new method for generating 3D faces and rendering them to images by combining the controllability of nonlinear 3DMMs with the high fidelity of implicit 3D GANs. Inspired by StyleSDF, our model uses a similar architecture but enforces the latent space to match the interpretable and physical parameters of the nonlinear 3D morphable model MOST-GAN.
Paper: https://www.merl.com/publications/TR2024-045
2. “Tracklet-based Explainable Video Anomaly Localization” by A. Singh, M. J. Jones, and E. Learned-Miller
This paper describes a new method for localizing anomalous activity in video of a scene given sample videos of normal activity from the same scene. The method is based on detecting and tracking objects in the scene and estimating high-level attributes of the objects such as their location, size, short-term trajectory and object class. These high-level attributes can then be used to detect unusual activity as well as to provide a human-understandable explanation for what is unusual about the activity.
Paper: https://www.merl.com/publications/TR2024-057
3. "SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models" by X. Chen, J. Liu, Y. Wang, P. Wang, M. Brand, G. Wang, and T. Koike-Akino
This paper proposes a generalized framework called SuperLoRA that unifies and extends different variants of low-rank adaptation (LoRA). By introducing new options with grouping, folding, shuffling, projection, and tensor decomposition, SuperLoRA offers high flexibility and demonstrates superior performance, with up to a 10-fold gain in parameter efficiency for transfer-learning tasks.
Paper: https://www.merl.com/publications/TR2024-062
MERL co-organized workshops:
1. "Multimodal Algorithmic Reasoning Workshop" by A. Cherian, K-C. Peng, S. Lohit, M. Chatterjee, H. Zhou, K. Smith, T. K. Marks, J. Matthiesen, and J. Tenenbaum
Workshop link: https://marworkshop.github.io/cvpr24/index.html
2. "The 5th Workshop on Fair, Data-Efficient, and Trusted Computer Vision" by K-C. Peng, et al.
Workshop link: https://fadetrcv.github.io/2024/
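For readers unfamiliar with the baseline that SuperLoRA generalizes, the following is a minimal sketch of plain low-rank adaptation (LoRA) on a single linear layer. The dimensions, rank, and initialization are illustrative assumptions; SuperLoRA's grouping, folding, shuffling, projection, and tensor-decomposition options are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4  # illustrative sizes; r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def adapted_forward(x):
    # LoRA forward pass: y = W x + B (A x); only A and B are trained
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# Zero-initializing B makes the adapted layer start identical to the base layer
assert np.allclose(adapted_forward(x), W @ x)

full = d_out * d_in          # parameters updated by full fine-tuning
lora = r * (d_out + d_in)    # parameters updated by LoRA
print(f"trainable: {lora} vs {full} ({full / lora:.0f}x fewer)")
```

Because B starts at zero, fine-tuning begins exactly at the pretrained model and moves away only as the low-rank factors learn.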
NEWS: MERL researchers presenting four papers and organizing the VLAR-SMART-101 Workshop at ICCV 2023
Date: October 2, 2023 - October 6, 2023
Where: Paris, France
MERL Contacts: Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Suhas Lohit; Tim K. Marks; Pedro Miraldo; Kuan-Chuan Peng; Ye Wang
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief: MERL researchers are presenting 4 papers and organizing the VLAR-SMART-101 workshop at the ICCV 2023 conference, which will be held in Paris, France, October 2-6. ICCV is one of the most prestigious and competitive international conferences in computer vision. Details are provided below.
1. Conference paper: “Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis,” by Nithin Gopalakrishnan Nair, Anoop Cherian, Suhas Lohit, Ye Wang, Toshiaki Koike-Akino, Vishal Patel, and Tim K. Marks
Conditional generative models typically demand large annotated training sets to achieve high-quality synthesis. As a result, there has been significant interest in plug-and-play generation, i.e., using a pre-defined model to guide the generative process. In this paper, we introduce Steered Diffusion, a generalized framework for fine-grained photorealistic zero-shot conditional image generation using a diffusion model trained for unconditional generation. The key idea is to steer the image generation of the diffusion model during inference via designing a loss using a pre-trained inverse model that characterizes the conditional task. Our model shows clear qualitative and quantitative improvements over state-of-the-art diffusion-based plug-and-play models, while adding negligible computational cost.
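As a loose toy analogue of this steering idea (not the paper's models or algorithm): Langevin-style sampling from a simple 1D Gaussian prior, where each step adds the gradient of a task loss built from a stand-in "inverse model" f. Everything here (f, y, the guidance weight, the noise scale) is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(4)

def prior_score(x):
    # Score (gradient of log-density) of the unconditional prior N(0, 1)
    return -x

f = lambda x: 2.0 * x   # hypothetical pre-trained "inverse model"
y = 3.0                 # desired measurement: the condition is f(x) ~ y
guidance = 0.5          # weight trading off prior vs. condition

x = rng.standard_normal()
step = 0.05
for _ in range(2000):
    grad_loss = 2.0 * (f(x) - y) * 2.0          # d/dx of (f(x) - y)^2
    steered = prior_score(x) - guidance * grad_loss
    x += step * steered + np.sqrt(2 * step) * 0.05 * rng.standard_normal()

# The sample settles between the prior mode (0) and the condition's
# solution (1.5), at the balance point of the two gradients (about 1.2)
print(f"steered sample: {x:.2f}")
```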
2. Conference paper: "BANSAC: A dynamic BAyesian Network for adaptive SAmple Consensus," by Valter Piedade and Pedro Miraldo
We derive a dynamic Bayesian network that updates individual data points' inlier scores while iterating RANSAC. At each iteration, we apply weighted sampling using the updated scores. Our method works with or without prior data point scorings. In addition, we use the updated inlier/outlier scoring for deriving a new stopping criterion for the RANSAC loop. Our method outperforms the baselines in accuracy while needing less computational time.
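The paper's dynamic Bayesian network update is more involved, but the core loop (weighted hypothesis sampling driven by per-point inlier scores that are refined every iteration) can be sketched on a toy 2D line-fitting problem. The data, threshold, and exponential-smoothing score update below are illustrative assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 80 points near the line y = 2x + 1, plus 20 gross outliers
xs = rng.uniform(-1, 1, 80)
inliers = np.stack([xs, 2 * xs + 1 + rng.normal(0, 0.01, 80)], axis=1)
pts = np.vstack([inliers, rng.uniform(-3, 3, (20, 2))])

def fit_line(p, q):
    # Line through p and q as (a, b, c) with ax + by + c = 0 and (a, b) unit
    a, b = p[1] - q[1], q[0] - p[0]
    n = np.hypot(a, b) + 1e-12
    return a / n, b / n, -(a * p[0] + b * p[1]) / n

def residuals(line, pts):
    a, b, c = line
    return np.abs(a * pts[:, 0] + b * pts[:, 1] + c)

scores = np.full(len(pts), 0.5)   # per-point inlier beliefs
best, best_inl = None, 0
for _ in range(200):
    # Weighted sampling: points currently believed to be inliers are
    # more likely to be chosen for the next minimal hypothesis
    i, j = rng.choice(len(pts), 2, replace=False, p=scores / scores.sum())
    line = fit_line(pts[i], pts[j])
    inl = residuals(line, pts) < 0.05
    if inl.sum() > best_inl:
        best, best_inl = line, int(inl.sum())
    # Crude stand-in for the Bayesian update: nudge beliefs toward agreement
    scores = 0.9 * scores + 0.1 * inl

print(best_inl, "inliers found out of", len(pts))
```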
3. Conference paper: "Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes," by Fabien Delattre, David Dirnfeld, Phat Nguyen, Stephen Scarano, Michael J. Jones, Pedro Miraldo, and Erik Learned-Miller
We present a novel approach to estimating camera rotation in crowded, real-world scenes captured using a handheld monocular video camera. Our method uses a novel generalization of the Hough transform on SO(3) to efficiently find the camera rotation most compatible with the optical flow. Because the setting is not addressed well by existing datasets, we provide a new dataset and benchmark, with high-accuracy and rigorously annotated ground truth on 17 video sequences. Our method is almost 40% more accurate than the next best method.
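The actual method votes over full rotations on SO(3); the following is only a 1D toy analogue of Hough-style voting, recovering a single in-plane rotation angle from noisy correspondences with outliers. The grid resolution, noise levels, and outlier fraction are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

theta = 0.4  # ground-truth in-plane rotation (radians)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = rng.standard_normal((100, 2))
q = p @ R.T + rng.normal(0, 0.005, (100, 2))   # rotated points + noise
q[:20] = rng.standard_normal((20, 2))          # 20% gross outliers

# Each correspondence casts one vote: the angle rotating p_i onto q_i
votes = np.arctan2(q[:, 1], q[:, 0]) - np.arctan2(p[:, 1], p[:, 0])
votes = (votes + np.pi) % (2 * np.pi) - np.pi  # wrap into [-pi, pi)

# Accumulate votes on a discretized angle grid and take the peak cell;
# outlier votes scatter uniformly while inlier votes pile up near theta
hist, edges = np.histogram(votes, bins=np.linspace(-np.pi, np.pi, 721))
k = int(np.argmax(hist))
estimate = 0.5 * (edges[k] + edges[k + 1])
print(f"estimated {estimate:.3f} rad (true {theta:.3f})")
```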
4. Workshop paper: "Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection" by Manish Sharma*, Moitreya Chatterjee*, Kuan-Chuan Peng, Suhas Lohit, and Michael Jones
While state-of-the-art object detection methods for RGB images have reached some level of maturity, the same is not true for Infrared (IR) images. The primary bottleneck towards bridging this gap is the lack of sufficient labeled training data for IR images. To address this issue, we present TensorFact, a novel tensor decomposition method which splits the convolution kernels of a CNN into low-rank factor matrices with fewer parameters. This compressed network is first pre-trained on RGB images and then augmented with only a few parameters. The augmented network is then trained on IR images while freezing the weights trained on RGB, which prevents over-fitting and allows it to generalize better. Experiments show that our method outperforms the state of the art.
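The paper's specific factorization differs, but the underlying idea (replacing a convolution kernel with low-rank factor matrices that have far fewer parameters) can be sketched with a truncated SVD. The shapes and rank below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# A conv layer's kernel tensor: (out_channels, in_channels, k, k)
c_out, c_in, k = 32, 16, 3
W = rng.standard_normal((c_out, c_in, k, k))

# Flatten to a matrix, then keep a rank-r truncated SVD: W ~ U_r @ V_r.
# The two small factors replace the full kernel as the trainable weights.
M = W.reshape(c_out, c_in * k * k)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 8
U_r = U[:, :r] * s[:r]   # (c_out, r)
V_r = Vt[:r]             # (r, c_in * k * k)

W_approx = (U_r @ V_r).reshape(c_out, c_in, k, k)  # reconstructed kernel
full_params, factored = M.size, U_r.size + V_r.size
print(f"{factored} factored params vs {full_params} in the full kernel")
```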
5. “Vision-and-Language Algorithmic Reasoning (VLAR) Workshop and SMART-101 Challenge” by Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Tim K. Marks, Ram Ramrakhya, Honglu Zhou, Kevin A. Smith, Joanna Matthiesen, and Joshua B. Tenenbaum
MERL researchers, along with researchers from MIT, Georgia Tech, Math Kangaroo USA, and Rutgers University, are jointly organizing a workshop on vision-and-language algorithmic reasoning at ICCV 2023 and conducting a challenge based on the SMART-101 puzzles described in the paper "Are Deep Neural Networks SMARTer than Second Graders?". A focus of this workshop is to bring together outstanding faculty and researchers working at the intersections of vision, language, and cognition to offer their perspectives on recent breakthroughs in large language models and artificial general intelligence, and to showcase cutting-edge research that could inspire the audience to search for the missing pieces in our quest to solve the puzzle of artificial intelligence.
Workshop link: https://wvlar.github.io/iccv23/
See All News & Events for Pedro
Research Highlights
Internships with Pedro
CV0063: Internship - Visual Simultaneous Localization and Mapping
MERL is looking for a self-motivated graduate student to work on Visual Simultaneous Localization and Mapping (V-SLAM). Based on the candidate’s interests, the intern can work on a variety of topics such as (but not limited to): camera pose estimation, feature detection and matching, visual-LiDAR data fusion, pose-graph optimization, loop closure detection, and image-based camera relocalization. The ideal candidate would be a PhD student with a strong background in 3D computer vision and good programming skills in C/C++ and/or Python. The candidate must have published at least one paper in a top-tier computer vision, machine learning, or robotics venue, such as CVPR, ECCV, ICCV, NeurIPS, ICRA, or IROS. The intern will collaborate with MERL researchers to derive and implement new algorithms for V-SLAM, conduct experiments, and report findings. A submission to a top-tier conference is expected. The duration of the internship and start date are flexible.
Required Specific Experience
- Experience with 3D Computer Vision and Simultaneous Localization & Mapping.
CV0064: Internship - Robust Estimation for Computer Vision
MERL is looking for a self-motivated graduate student to work on robust estimation in Computer Vision. Based on the candidate’s interests, the intern can work on a variety of topics such as (but not limited to) camera pose estimation, 3D registration, camera calibration, pose-graph optimization, and transformation averaging. The ideal candidate would be a PhD student with a strong background in 3D computer vision, RANSAC, and graduated non-convexity algorithms, and good programming skills in C/C++ and/or Python. The candidate must have published at least one paper in a top-tier computer vision, machine learning, or robotics venue, such as CVPR, ECCV, ICCV, NeurIPS, ICRA, or IROS. The intern will collaborate with MERL researchers to derive and implement new algorithms for robust estimation, conduct experiments, and report findings. A submission to a top-tier conference is expected. The duration of the internship and start date are flexible.
Required Specific Experience
- Experience with 3D computer vision, RANSAC, or graduated non-convexity algorithms for computer vision.
MERL Publications
- "A Probability-guided Sampler for Neural Implicit Surface Rendering", European Conference on Computer Vision (ECCV), September 2024. BibTeX TR2024-129 PDF
- @inproceedings{Pais2024sep,
- author = {Pais, Goncalo and Piedade, Valter and Chatterjee, Moitreya and Greiff, Marcus and Miraldo, Pedro},
- title = {A Probability-guided Sampler for Neural Implicit Surface Rendering},
- booktitle = {European Conference on Computer Vision (ECCV)},
- year = 2024,
- month = sep,
- url = {https://www.merl.com/publications/TR2024-129}
- }
- "Multi-Agent Formation Control using Epipolar Constraints", IEEE Robotics and Automation Letters, DOI: 10.1109/LRA.2024.3444690, Vol. 9, No. 12, pp. 11002-11009, September 2024. BibTeX TR2024-147 PDF
- @article{Roque2024sep,
- author = {Roque, Pedro and Miraldo, Pedro and Dimarogonas, Dimos},
- title = {Multi-Agent Formation Control using Epipolar Constraints},
- journal = {IEEE Robotics and Automation Letters},
- year = 2024,
- volume = 9,
- number = 12,
- pages = {11002--11009},
- month = sep,
- doi = {10.1109/LRA.2024.3444690},
- issn = {2377-3766},
- url = {https://www.merl.com/publications/TR2024-147}
- }
- "Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), May 2024, pp. 19667-19679. BibTeX TR2024-042 PDF Videos Software
- @inproceedings{Liu2024may,
- author = {Liu, Xinhang and Tai, Yu-wing and Tang, Chi-Keung and Miraldo, Pedro and Lohit, Suhas and Chatterjee, Moitreya},
- title = {Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling},
- booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- year = 2024,
- pages = {19667--19679},
- month = may,
- publisher = {IEEE},
- url = {https://www.merl.com/publications/TR2024-042}
- }
- "Oriented-grid Encoder for 3D Implicit Representations", International Conference on 3D Vision (3DV), DOI: 10.1109/3DV62453.2024.00101, March 2024, pp. 1208-1218. BibTeX TR2024-031 PDF
- @inproceedings{Gaur2024mar,
- author = {Gaur, Arihant and Pais, Goncalo and Miraldo, Pedro},
- title = {Oriented-grid Encoder for 3D Implicit Representations},
- booktitle = {International Conference on 3D Vision (3DV)},
- year = 2024,
- pages = {1208--1218},
- month = mar,
- publisher = {IEEE},
- doi = {10.1109/3DV62453.2024.00101},
- issn = {2475-7888},
- isbn = {979-8-3503-6245-9},
- url = {https://www.merl.com/publications/TR2024-031}
- }
- "Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes", IEEE International Conference on Computer Vision (ICCV), DOI: 10.1109/ICCV51070.2023.00894, October 2023, pp. 3715-3724. BibTeX TR2023-123 PDF Video Software
- @inproceedings{Delattre2023oct,
- author = {Delattre, Fabien and Dirnfeld, David and Nguyen, Phat and Scarano, Stephen and Jones, Michael J. and Miraldo, Pedro and Learned-Miller, Erik},
- title = {Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes},
- booktitle = {IEEE International Conference on Computer Vision (ICCV)},
- year = 2023,
- pages = {3715--3724},
- month = oct,
- publisher = {IEEE/CVF},
- doi = {10.1109/ICCV51070.2023.00894},
- issn = {2380-7504},
- isbn = {979-8-3503-0718-4},
- url = {https://www.merl.com/publications/TR2023-123}
- }
Other Publications
- "On Incremental Structure-from-Motion using Lines", IEEE Trans. Robotics (T-RO), Vol. 38, No. 1, pp. 391-406, 2022. BibTeX
- @Article{j32,
- author = {Mateus, Andr\'e and Tahri, Omar and Aguiar, A. Pedro and Lima, Pedro U. and Pedro Miraldo},
- title = {On Incremental Structure-from-Motion using Lines},
- journal = {IEEE Trans. Robotics (T-RO)},
- year = 2022,
- volume = 38,
- number = 1,
- pages = {391--406},
- note = {[\href{https://arxiv.org/abs/2105.11196}{arXiv:2105.11196}, \href{https://doi.org/10.1109/TRO.2021.3085487}{doi}]}
- }
- "Solving the discrete Euler-Arnold equations for the generalized rigid body motion", Journal of Computational and Applied Mathematics (CAM), Vol. 402, pp. 113814, 2022. BibTeX
- @Article{j33,
- author = {Cardoso, Jo{\~a}o R. and Pedro Miraldo},
- title = {Solving the discrete Euler-Arnold equations for the generalized rigid body motion},
- journal = {Journal of Computational and Applied Mathematics (CAM)},
- year = 2022,
- volume = 402,
- pages = 113814,
- note = {[\href{https://arxiv.org/abs/2109.00505}{\it arXiv:2109.00505}, \href{https://doi.org/10.1016/j.cam.2021.113814}{doi}]}
- }
- "An observer cascade for velocity and multiple line estimation", IEEE Int'l Conf. Robotics and Automation (ICRA), 2022. BibTeX
- @Inproceedings{j34,
- author = {Mateus, Andr\'e and Lima, Pedro U. and Pedro Miraldo},
- title = {An observer cascade for velocity and multiple line estimation},
- booktitle = {IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2022,
- note = {[\href{https://arxiv.org/abs/2203.01879}{arXiv:2203.01879}]}
- }
- "Active Depth Estimation: Stability Analysis and its Applications", IEEE Int'l Conf. Robotics and Automation (ICRA), 2020, pp. 2002-2008. BibTeX
- @Inproceedings{j26,
- author = {Rodrigues, R. T. and P. Miraldo and Dimarogonas, D. V. and Aguiar, A. P.},
- title = {Active Depth Estimation: Stability Analysis and its Applications},
- booktitle = {IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2020,
- pages = {2002--2008},
- note = {[\href{https://arxiv.org/abs/2003.07137}{\it arXiv:2003.07137},\href{https://doi.org/10.1109/ICRA40945.2020.9196670}{doi}]}
- }
- "3DRegNet: A Deep Neural Network for 3D Point Registration", IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), 2020, pp. 7191-7201. BibTeX
- @Inproceedings{j27,
- author = {Pais, G. Dias and Ramalingam, Srikumar and Govindu, Venu Madhav and Nascimento, Jacinto C. and Chellappa, Rama and Pedro Miraldo},
- title = {3DRegNet: A Deep Neural Network for 3D Point Registration},
- booktitle = {IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR)},
- year = 2020,
- pages = {7191--7201},
- note = {[\href{https://arxiv.org/abs/1904.01701}{\it arXiv:1904.01701},\href{https://doi.org/10.1109/CVPR42600.2020.00722}{doi}]}
- }
- "Minimal Solvers for 3D Scan Alignment with Pairs of Intersecting Lines", IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), 2020, pp. 7232-7242. BibTeX
- @Inproceedings{j28,
- author = {Mateus, Andr{\'e} and Ramalingam, Srikumar and Pedro Miraldo},
- title = {Minimal Solvers for 3D Scan Alignment with Pairs of Intersecting Lines},
- booktitle = {IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR)},
- year = 2020,
- pages = {7232--7242},
- note = {[\href{https://doi.org/10.1109/CVPR42600.2020.00726}{doi}]}
- }
- "On the Generalized Essential Matrix Correction: An efficient solution to the problem and its applications", Journal of Mathematical Imaging and Vision, Vol. 62, pp. 1107-1120, 2020. BibTeX
- @Article{j29,
- author = {Pedro Miraldo and Cardoso, Jo{\~a}o R.},
- title = {On the Generalized Essential Matrix Correction: An efficient solution to the problem and its applications},
- journal = {Journal of Mathematical Imaging and Vision},
- year = 2020,
- volume = 62,
- pages = {1107--1120},
- note = {[\href{https://arxiv.org/abs/1709.06328}{\it arXiv:1709.06328}, \href{https://doi.org/10.1007/s10851-020-00961-w}{doi}]}
- }
- "Fast Model Predictive Image-Based Visual Servoing for Quadrotors", IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), 2020, pp. 7566-7572. BibTeX
- @Inproceedings{j30,
- author = {Roque, Pedro and Bin, Elisa and Pedro Miraldo and Dimarogonas, Dimos V.},
- title = {Fast Model Predictive Image-Based Visual Servoing for Quadrotors},
- booktitle = {IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS)},
- year = 2020,
- pages = {7566--7572},
- note = {[\href{https://doi.org/10.1109/IROS45743.2020.9340759}{doi}]}
- }
- "Mapping of Sparse 3D Data using Alternating Projection", Asian Conf. Computer Vision (ACCV), 2020, pp. 295-313. BibTeX
- @Inproceedings{j31,
- author = {Ranade, Siddhant and Xin, Yu and Kakkar, Shantnu and Pedro Miraldo and Ramalingam, Srikumar},
- title = {Mapping of Sparse 3D Data using Alternating Projection},
- booktitle = {Asian Conf. Computer Vision (ACCV)},
- year = 2020,
- pages = {295--313},
- note = {[\href{https://arxiv.org/abs/2010.02516}{\it arXiv:2010.02516},\href{https://doi.org/10.1007/978-3-030-69525-5_18}{doi}]}
- }
- "POSEAMM: A Unified Framework for Solving Pose Problems using an Alternating Minimization Method", IEEE Int'l Conf. Robotics and Automation (ICRA), 2019, pp. 3493-3499. BibTeX
- @Inproceedings{j21,
- author = {Campos, J. and Rodrigues, J. R. and P. Miraldo},
- title = {POSEAMM: A Unified Framework for Solving Pose Problems using an Alternating Minimization Method},
- booktitle = {IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2019,
- pages = {3493--3499},
- note = {[\href{https://arxiv.org/abs/1904.04858}{\it arXiv:1904.04858}, \href{https://doi.org/10.1109/ICRA.2019.8793694}{doi}]}
- }
- "OmniDRL: Robust Pedestrian Detection using Deep Reinforcement Learning on Omnidirectional Cameras", IEEE Int'l Conf. Robotics and Automation (ICRA), 2019, pp. 4782-4789. BibTeX
- @Inproceedings{j22,
- author = {Pais, G. and Nascimento, J. C. and P. Miraldo},
- title = {OmniDRL: Robust Pedestrian Detection using Deep Reinforcement Learning on Omnidirectional Cameras},
- booktitle = {IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2019,
- pages = {4782--4789},
- note = {[\href{https://arxiv.org/abs/1903.00676}{\it arXiv:1903.00676}, \href{https://doi.org/10.1109/ICRA.2019.8794471}{doi}]}
- }
- "Minimal Solvers for Mini-Loop Closures in 3D Multi-Scan Alignment", IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9691-9700. BibTeX
- @Inproceedings{j23,
- author = {P. Miraldo and Saha, S. and Ramalingam, S.},
- title = {Minimal Solvers for Mini-Loop Closures in 3D Multi-Scan Alignment},
- booktitle = {IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR)},
- year = 2019,
- pages = {9691--9700},
- note = {[\href{https://arxiv.org/abs/1904.03941}{\it arXiv:1904.03941}, \href{https://doi.org/10.1109/CVPR.2019.00993}{doi}]}
- }
- "A Framework for Depth Estimation and Relative Localization of Ground Robots using Computer Vision", IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), 2019, pp. 3719-3724. BibTeX
- @Inproceedings{j24,
- author = {Rodrigues, R. and P. Miraldo and Dimarogonas, D. V. and Aguiar, A. P.},
- title = {A Framework for Depth Estimation and Relative Localization of Ground Robots using Computer Vision},
- booktitle = {IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS)},
- year = 2019,
- pages = {3719--3724},
- note = {[\href{https://arxiv.org/abs/1908.00309}{\it arXiv:1908.00309}, \href{https://doi.org/10.1109/IROS40897.2019.8968459}{doi}]}
- }
- "SocRob@Home Integrating AI Components in a Domestic Robot System", Künstliche Intelligenz (KI), Vol. 33, No. 4, pp. 343-356, 2019. BibTeX
- @Article{j25,
- author = {Lima, P. U. and Azevedo, C. and Brzozowska, E. and Cartucho, J. and Dias, T. J. and Gon\c{c}alves, J. and Kinarullathil, M. and Lawless, G. and Lima, O. and Luz, R. and P. Miraldo and Piazza, E. and Silva, M. and Veiga, T. and Ventura, R.},
- title = {SocRob$@$Home Integrating AI Components in a Domestic Robot System},
- journal = {K\"{u}nstliche Intelligenz (KI)},
- year = 2019,
- volume = 33,
- number = 4,
- pages = {343--356},
- note = {[\href{https://doi.org/10.1007/s13218-019-00618-w}{doi}]}
- }
- "Generic distortion model for metrology under optical microscopes", Optics and Lasers in Engineering (OLEN), Vol. 103, pp. 119-126, 2018. BibTeX
- @Article{j15,
- author = {Liu, X. and Li, Z. and Zhong, K. and Chao, Y. and P. Miraldo and Shi, Y.},
- title = {Generic distortion model for metrology under optical microscopes},
- journal = {Optics and Lasers in Engineering (OLEN)},
- year = 2018,
- volume = 103,
- pages = {119--126},
- note = {[\href{https://doi.org/10.1016/j.optlaseng.2017.12.006}{doi}]}
- }
- "Low-level Active Visual Navigation: Increasing robustness of vision-based localization using potential fields", IEEE Robotics and Automation Letters (RA-L) and IEEE Int'l Conf. Robotics and Automation (ICRA), Vol. 3, No. 3, pp. 2079-2086, 2018. BibTeX
- @Article{j16,
- author = {Rodrigues, R. and Basiri, M. and Aguiar, A. P. and P. Miraldo},
- title = {Low-level Active Visual Navigation: Increasing robustness of vision-based localization using potential fields},
- journal = {IEEE Robotics and Automation Letters (RA-L) and IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2018,
- volume = 3,
- number = 3,
- pages = {2079--2086},
- note = {[\href{https://arxiv.org/abs/1801.07249}{\it arXiv:1801.07249}, \href{https://doi.org/10.1109/LRA.2018.2809628}{doi}]}
- }
- "Analytical Modeling of Vanishing Points and Curves in Catadioptric Cameras", IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2012-2021. BibTeX
- @Inproceedings{j17,
- author = {P. Miraldo and Eiras, F. and Ramalingam, S.},
- title = {Analytical Modeling of Vanishing Points and Curves in Catadioptric Cameras},
- booktitle = {IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR)},
- year = 2018,
- pages = {2012--2021},
- note = {[\href{https://arxiv.org/abs/1804.09460}{\it arXiv:1804.09460}, \href{https://doi.org/10.1109/CVPR.2018.00215}{doi}]}
- }
- "Active Structure-from-Motion for 3D Straight Lines", IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), 2018, pp. 5819-5825. BibTeX
- @Inproceedings{j18,
- author = {Mateus, A. and Tahri, O. and P. Miraldo},
- title = {Active Structure-from-Motion for 3D Straight Lines},
- booktitle = {IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS)},
- year = 2018,
- pages = {5819--5825},
- note = {[\href{https://arxiv.org/abs/1807.00753}{\it arXiv:1807.00753}, \href{https://doi.org/10.1109/IROS.2018.8593793}{doi}]}
- }
- "A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation using Points and Lines", European Conf. Computer Vision (ECCV), 2018, pp. 490-507. BibTeX
- @Inproceedings{j19,
- author = {P. Miraldo and Dias, T. and Ramalingam, S.},
- title = {A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation using Points and Lines},
- booktitle = {European Conf. Computer Vision (ECCV)},
- year = 2018,
- pages = {490--507},
- note = {[\href{https://arxiv.org/abs/1807.09970}{\it arXiv:1807.09970}, \href{https://doi.org/10.1007/978-3-030-01270-0_29}{doi}]}
- }
- "Efficient and Robust Pedestrian Detection using Deep Learning for Human-Aware Navigation", Robotics and Autonomous Systems (RAS), Vol. 113, pp. 23-37, 2018. BibTeX
- @Article{j20,
- author = {Mateus, A. and Ribeiro, D. and P. Miraldo and Nascimento, J. C.},
- title = {Efficient and Robust Pedestrian Detection using Deep Learning for Human-Aware Navigation},
- journal = {Robotics and Autonomous Systems (RAS)},
- year = 2018,
- volume = 113,
- pages = {23--37},
- note = {[\href{https://arxiv.org/abs/1607.04441}{\it arXiv:1607.04441}, \href{https://doi.org/10.1016/j.robot.2018.12.007}{doi}]}
- }
- "A framework to calibrate the scanning electron microscope under any magnifications", IEEE Photonics Technology Letters (PT-L), Vol. 28, No. 16, pp. 1715-1718, 2016. BibTeX
- @Article{j12,
- author = {Liu, X. and Li, Z. and P. Miraldo and Zhong, K. and Shi, Y.},
- title = {A framework to calibrate the scanning electron microscope under any magnifications},
- journal = {IEEE Photonics Technology Letters (PT-L)},
- year = 2016,
- volume = 28,
- number = 16,
- pages = {1715--1718},
- note = {[\href{https://doi.org/10.1109/LPT.2016.2522758}{doi}]}
- }
- "Efficient Object Search for Mobile Robots in Dynamic Environments: Semantic Map as an Input for the Decision Maker", IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), 2016, pp. 2745-2750. BibTeX
- @Inproceedings{j13,
- author = {Veiga, T. and P. Miraldo and Ventura, R. and Lima, P.},
- title = {Efficient Object Search for Mobile Robots in Dynamic Environments: Semantic Map as an Input for the Decision Maker},
- booktitle = {IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS)},
- year = 2016,
- pages = {2745--2750},
- note = {[\href{https://doi.org/10.1109/IROS.2016.7759426}{doi}]}
- }
- "Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment", IEEE Robotics Automation Magazine (RA-M), Vol. 22, No. 3, pp. 53-61, 2015. BibTeX
- @Article{j10,
- author = {Amigoni, F. and Berghofer, J. and Bonarini, A. and Fontana, G. and Hochgeschwender, N. and Iocchi, L. and Kraetzschmar, G. K. and Lima, P. and Matteucci, M. and P. Miraldo and Nardi, D. and Schiaonati, V.},
- title = {Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment},
- journal = {IEEE Robotics \& Automation Magazine (RA-M)},
- year = 2015,
- volume = 22,
- number = 3,
- pages = {53--61},
- note = {[\href{https://doi.org/10.1109/MRA.2015.2448871}{doi}]}
- }
- "Direct Solution to the Minimal Generalized Pose", IEEE Trans. Cybernetics (T-CYB), Vol. 45, No. 3, pp. 404-415, 2015.
- @Article{j5,
- author = {Miraldo, P. and Araujo, H.},
- title = {Direct Solution to the Minimal Generalized Pose},
- journal = {IEEE Trans. Cybernetics (T-CYB)},
- year = 2015,
- volume = 45,
- number = 3,
- pages = {404--415},
- note = {[\href{https://doi.org/10.1109/TCYB.2014.2326970}{doi}]}
- }
- "Pose Estimation for General Cameras Using Lines", IEEE Trans. Cybernetics (T-CYB), Vol. 45, No. 10, pp. 2156-2164, 2015.
- @Article{j6,
- author = {Miraldo, P. and Araujo, H. and Gon\c{c}alves, N.},
- title = {Pose Estimation for General Cameras Using Lines},
- journal = {IEEE Trans. Cybernetics (T-CYB)},
- year = 2015,
- volume = 45,
- number = 10,
- pages = {2156--2164},
- note = {[\href{https://doi.org/10.1109/TCYB.2014.2366378}{doi}]}
- }
- "Generalized Essential Matrix: Properties of the Singular Value Decomposition", Image and Vision Computing (IVC), Vol. 34, pp. 45-50, 2015.
- @Article{j7,
- author = {Miraldo, P. and Araujo, H.},
- title = {Generalized Essential Matrix: Properties of the Singular Value Decomposition},
- journal = {Image and Vision Computing (IVC)},
- year = 2015,
- volume = 34,
- pages = {45--50},
- note = {[\href{https://doi.org/10.1016/j.imavis.2014.11.003}{doi}]}
- }
- "Pose Estimation for Non-Central Cameras Using Planes", Springer J. Intelligent & Robotic Systems (JINT), Vol. 80, No. 3, pp. 595-608, 2015.
- @Article{j8,
- author = {Miraldo, P. and Araujo, H.},
- title = {Pose Estimation for Non-Central Cameras Using Planes},
- journal = {Springer J. Intelligent \& Robotic Systems (JINT)},
- year = 2015,
- volume = 80,
- number = 3,
- pages = {595--608},
- note = {[\href{https://doi.org/10.1007/s10846-015-0193-3}{doi}]}
- }
- "A Simple and Robust Solution to the Minimal General Pose Estimation", IEEE Int'l Conf. Robotics and Automation (ICRA), 2014, pp. 2119-2125.
- @Inproceedings{j3,
- author = {Miraldo, P. and Araujo, H.},
- title = {A Simple and Robust Solution to the Minimal General Pose Estimation},
- booktitle = {IEEE Int'l Conf. Robotics and Automation (ICRA)},
- year = 2014,
- pages = {2119--2125},
- note = {[\href{https://doi.org/10.1109/ICRA.2014.6907150}{doi}]}
- }
- "Planar Pose Estimation for General Cameras using Known 3D Lines", IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), 2014, pp. 4234-4240.
- @Inproceedings{j4,
- author = {Miraldo, P. and Araujo, H.},
- title = {Planar Pose Estimation for General Cameras using Known 3D Lines},
- booktitle = {IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS)},
- year = 2014,
- pages = {4234--4240},
- note = {[\href{https://doi.org/10.1109/IROS.2014.6943159}{doi}]}
- }
- "Calibration of Smooth Camera Models", IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), Vol. 35, No. 9, pp. 2091-2103, 2013.
- @Article{j2,
- author = {Miraldo, P. and Araujo, H.},
- title = {Calibration of Smooth Camera Models},
- journal = {IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI)},
- year = 2013,
- volume = 35,
- number = 9,
- pages = {2091--2103},
- note = {[\href{https://doi.org/10.1109/TPAMI.2012.258}{doi}]}
- }
- "Point-based Calibration Using a Parametric Representation of General Imaging Models", IEEE/CVF Int'l Conf. Computer Vision (ICCV), 2011, pp. 2304-2311.
- @Inproceedings{j1,
- author = {Miraldo, P. and Araujo, H. and Queir\'{o}, J.},
- title = {Point-based Calibration Using a Parametric Representation of General Imaging Models},
- booktitle = {IEEE/CVF Int'l Conf. Computer Vision (ICCV)},
- year = 2011,
- pages = {2304--2311},
- note = {[\href{https://doi.org/10.1109/ICCV.2011.6126511}{doi}]}
- }
- "On Incremental Structure-from-Motion using Lines", IEEE Trans. Robotics (T-RO), Vol. 38, No. 1, pp. 391-406, 2022.
-
Software & Data Downloads
-
Videos