International Teaching | ARTIFICIAL VISION
ARTIFICIAL VISION
cod. 0622700045

| | |
|---|---|
| Department | Department of Information and Electrical Engineering and Applied Mathematics |
| Level | EQF7 |
| Degree course | Computer Engineering |
| Academic year | 2025/2026 |
| Type of activity | Compulsory |
| Year of course | 2 |
| Year of didactic system | 2022 |
| Semester | Autumn |
| SSD | CFU | Hours | Activity |
|---|---|---|---|
| ING-INF/05 | 5 | 40 | Lessons |
| ING-INF/05 | 2 | 16 | Exercises |
| ING-INF/05 | 2 | 16 | Lab |
Objectives

The course aims at providing the competences on the main methodologies and techniques required to realize an artificial vision system.

Knowledge and understanding: knowledge of the different tasks carried out within an artificial vision system, in particular the low-level processing phases (acquisition, filtering), the intermediate-level phases (regionalization and contour extraction) and the high-level processing phases (shape recognition, tracking), as well as understanding of the basic techniques for implementing such functions.

Applying knowledge and understanding: dimension an image and/or video capture system satisfying the requirements; design and implement an artificial vision system for the interpretation of images and/or videos using functions of the OpenCV computer vision software library, integrating them with machine learning techniques.
Prerequisites

In order to achieve the goals of the course, knowledge of the Python programming language and of the main machine learning and deep learning frameworks, such as PyTorch, is required.
Contents

Didactic Unit 1: Architecture of Artificial Vision Systems (hours lecture/practice/laboratory: 4/0/0)

1 (2 hours lecture): Introduction to computer vision.
2 (2 hours lecture): Architecture of a computer vision system and stages of the processing pipeline.

Knowledge and understanding: general architecture of a computer vision system.
Applied knowledge and understanding: recognize the different components and processing stages of a computer vision system.

Didactic Unit 2: Image Acquisition, Representation and Preprocessing (hours lecture/practice/laboratory: 18/14/0)

3 (4 hours lecture): Architecture of an image acquisition system. Optics: pinhole and thin lens models. Key concepts: focal length, field of view, depth of field, aperture, exposure.
4 (2 hours lecture): Types of sensors: visible, infrared, thermal. Elements of depth imaging. Event-based or neuromorphic cameras.
5 (2 hours lecture): Sizing of an image processing system.
6 (6 hours practice): Methods and tools for designing a camera acquisition system in a real scenario. Discussion of three examples.
7 (2 hours practice): Introduction to OpenCV. Basic image operations: loading from/saving to file, image/video acquisition and display.
8 (2 hours lecture): Inverse perspective mapping.
9 (2 hours practice): Practice on inverse perspective mapping.
10 (8 hours lecture): Thresholding. Low-pass filters. High-pass filters. Morphological operators. Canny edge detection algorithm. Connected component labeling.
11 (4 hours practice): Practice on point and local operators, morphological operators, edge detection, and connected component labeling.

Knowledge and understanding: main parameters of an image acquisition system; main image preprocessing techniques, their application areas, advantages and limitations; the OpenCV software library for computer vision.
Applied knowledge and understanding: identify the image acquisition system based on specifications such as object distance, area coverage, and minimum resolution; identify the most suitable preprocessing and image segmentation techniques for a specific computer vision problem.

Didactic Unit 3: Image and Video Interpretation (hours lecture/practice/laboratory: 22/12/0)

12 (4 hours lecture): Handcrafted feature extraction with PCA, LBP and HOG.
13 (2 hours practice): Practice on handcrafted feature extraction with PCA, LBP and HOG.
14 (4 hours lecture): Object detection using deep learning-based approaches.
15 (2 hours practice): Practice with an object detector for people detection.
16 (4 hours lecture): Multi-object tracking.
17 (2 hours practice): Practice on people tracking.
18 (4 hours lecture): Pedestrian attribute recognition.
19 (2 hours practice): Practice on pedestrian attribute recognition.
20 (4 hours lecture): Visual Question Answering on images.
21 (2 hours practice): Practice on VQA for images: pedestrian attribute recognition.
22 (2 hours lecture): Visual Question Answering on video.
23 (2 hours practice): Practice on VQA for video: spatio-temporal video grounding.

Knowledge and understanding: object detection and recognition using traditional machine learning and deep learning techniques; image and video analysis with tracking algorithms and multimodal systems.
Applied knowledge and understanding: apply traditional machine learning and deep learning techniques to develop an object detection and recognition system; apply computer vision techniques, including multimodal image-text or video-text technologies, for video analysis.

Didactic Unit 4: Project Work (hours lecture/practice/laboratory: 0/0/2)

24 (2 hours laboratory): Presentation of the final course project.

Applied knowledge and understanding: design and implement a complete computer vision system using modern computer vision technologies.

Total hours lecture/practice/laboratory: 44/26/2
Teaching Methods

The course includes theoretical lectures, in-class exercises and practical laboratory sessions. During the in-class exercises the students are divided into teams and assigned project works to be developed over the duration of the course. The projects cover all the contents of the course and are essential both for the acquisition of the related abilities and competences and for developing and reinforcing the ability to work in a team. In the laboratory sessions the students implement the assigned projects using the OpenCV software library. In order to take part in the final assessment and to gain the credits of the course, the student must have attended at least 70% of the hours of assisted teaching activities.
Verification of learning

The exam evaluates, as a whole: knowledge and understanding of the concepts presented in the course; the ability to apply that knowledge to solve programming problems requiring the use of artificial vision techniques; independence of judgment; communication skills; and the ability to learn. The exam includes two steps. The first consists of an oral examination and the discussion of the mid-term projects realized during the course. The second is based on the realization of a final-term project: the students, partitioned into teams, are required to realize a system for a competition among the teams; the design and methodological contributions of the students, together with the score achieved during the competition, are considered in the evaluation. The aim is to assess the acquired knowledge and ability to understand, the ability to learn, the ability to apply knowledge, independence of judgment, and the ability to work in a team. In the final evaluation, expressed in thirtieths, the interview and the mid-term project work account for 40%, while the final-term project accounts for 60%. Cum laude may be awarded to students who demonstrate that they can apply the knowledge autonomously, even in contexts other than those proposed in the course.
Texts

Lecture notes.
R. Szeliski, "Computer Vision: Algorithms and Applications", Springer.
M. Sonka, V. Hlavac, R. Boyle, "Image Processing, Analysis and Machine Vision", Chapman & Hall.

The teaching material is available on the university e-learning platform (http://elearning.unisa.it), accessible to students with their own university credentials.
More Information

The course is held in English.