
Results

Viewing 1 to 30 of 844
2017-03-28
Technical Paper
2017-01-0045
Guirong Zhuo, Cheng Wu, Fengbo Zhang
Vehicle active collision avoidance includes collision avoidance by braking and by steering; however, both methods have limitations. When the vehicle's speed is high or the road adhesion coefficient is small, the critical braking distance for collision avoidance becomes long, while collision avoidance by steering is restricted by the driving conditions in the adjacent lane. It is therefore important to establish the feasible region of active collision avoidance so that the optimal way to avoid an accident can be chosen. Model predictive control (MPC), as an optimization-based method, not only makes the control input optimal at the current time but can also achieve optimal control inputs over a future horizon.
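The braking limitation described in this abstract can be illustrated with the standard stopping-distance relation. This is a generic sketch, not the paper's model; the reaction time and friction coefficients below are assumed values.

```python
def critical_braking_distance(v, mu, t_react=1.0, g=9.81):
    """Stopping distance: reaction distance v*t_react plus
    braking distance v^2 / (2 * mu * g)."""
    return v * t_react + v ** 2 / (2 * mu * g)

# High speed or low road adhesion sharply lengthens the stopping distance.
d_dry = critical_braking_distance(30.0, 0.8)  # ~30 m/s on dry asphalt
d_ice = critical_braking_distance(30.0, 0.2)  # same speed, slippery road
```

When the distance to the obstacle is shorter than this value, braking alone cannot avoid the collision, which motivates the feasible-region analysis mentioned in the abstract.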
2017-03-28
Technical Paper
2017-01-0040
Michael Hafner, Thomas Pilutti
We propose a steering controller for automated trailer backup, which can be used on tractor-trailer configurations including fifth wheel campers and gooseneck style trailers. The controller steers the trailer based on real-time, driver-issued trailer curvature commands. We give a stability proof for the hierarchical control system and demonstrate robustness under a specific set of modeling errors. Simulation results are provided along with experimental data from a test vehicle and fifth wheel trailer.
2017-03-28
Technical Paper
2017-01-1555
Mirosław Jan Gidlewski, Krystof Jankowski, Andrzej Muszyński, Dariusz Żardecki
Lane change automation appears to be a fundamental problem of automated vehicle control, especially when the vehicle is driven at high speed. Selected relevant parts of a recent research project are reported in this paper, including a literature review, the developed models and control systems, and crucial simulation results. Two original models describing the dynamics of the controlled motion of the vehicle were used in the project, verified during road tests and in the laboratory environment. The first model – fully developed (multi-mass, 3D, nonlinear) – was used in simulations as the virtual plant to be controlled. The second model – a simplified reference model of the lateral dynamics of the vehicle (single-mass, 2D, linearized) – formed the basis for theoretical analysis, including the synthesis of the automatic control algorithm. That algorithm was based on optimal control theory.
2017-03-28
Technical Paper
2017-01-0050
Mario Berk, Hans-Martin Kroll, Olaf Schubert, Boris Buschardt, Daniel Straub
With increasing levels of driving automation, the information provided by automotive environment sensors becomes highly safety relevant. A correct assessment of a sensor's reliability is therefore crucial for ensuring the safety of the customer functions. There are currently no standardized procedures or guidelines for demonstrating the reliability of sensor information, so engineers are faced with setting up test procedures and estimating the required effort. Statistical hypothesis tests are commonly employed in this context. In this contribution, we present an alternative method based on Bayesian parameter inference, which is easy to implement and whose interpretation is more intuitive for engineers without a profound statistical education. It also enables a more realistic representation of dependencies among errors.
2017-03-28
Technical Paper
2017-01-0044
Roman Schmied, Gunda Obereigner, Harald Waschl
In the field of advanced driver assistance systems (ADAS), the ability to accurately estimate and predict the driving behavior of surrounding traffic participants has been shown to enable significant improvements of the respective ADAS in terms of economy and comfort. The interaction between the different participants can be an important aspect. One example of this interaction is car-following behavior in dense urban traffic. There are different phenomenological or psychological models of human car following which also consider variations between participants. Unfortunately, these models can seldom be applied directly for control or prediction in vehicle applications. A different way is to follow a control-oriented approach and model the human as a time-delay controller that tracks the inter-vehicle distance. The parameters are typically chosen based on empirical rules and do not consider variations between drivers.
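The control-oriented view in this abstract, a human driver as a delayed feedback controller tracking inter-vehicle distance, can be sketched as follows. The gains, delay, and gap values are illustrative assumptions, not the paper's identified parameters.

```python
from collections import deque

def simulate_follower(v_lead=15.0, kp=0.2, kd=0.6, delay_steps=8, dt=0.1,
                      desired_gap=20.0, init_gap=30.0, steps=600):
    """Follower's acceleration is proportional feedback on gap error and
    closing speed, acting on observations delayed by the driver's
    reaction time (delay_steps * dt seconds)."""
    gap, v_f = init_gap, v_lead
    buf = deque([(0.0, 0.0)] * delay_steps, maxlen=delay_steps)
    for _ in range(steps):
        buf.append((gap - desired_gap, v_lead - v_f))
        e_old, edot_old = buf[0]          # driver reacts to an old observation
        a = kp * e_old + kd * edot_old
        v_f += a * dt
        gap += (v_lead - v_f) * dt
    return gap, v_f
```

With these values the follower closes a 10 m gap error and settles near the desired spacing within the 60 s simulated horizon; larger delays or gains would produce the oscillatory following behavior such models are used to study.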
2017-03-28
Technical Paper
2017-01-0071
Vahid Taimouri, Michel Cordonnier, Kyoung Min Lee, Bryan Goodman
While operating a vehicle in either autonomous or occupant piloted mode, an array of sensors can be used to guide the vehicle including stereo cameras. The state-of-the-art distance map estimation algorithms, e.g. stereo matching, usually detect corresponding features in stereo images, and estimate disparities to compute the distance map in a scene. However, depending on the image size, content and quality, the feature extraction process can become inaccurate, unstable and slow. In contrast, we employ deep convolutional neural networks, and propose two architectures to estimate distance maps from stereo images. The first architecture is a simple and generic network that identifies which features to extract, and how to combine them in a multi-resolution framework. The second architecture is a more specialized one that extracts local similarity information from two images, which are used for stereo feature matching, and fuses them at multiple resolutions to generate the distance map.
2017-03-28
Technical Paper
2017-01-0096
Valentin Soloiu, Bernard Ibru, Thomas Beyerl, Tyler Naes, Charvi Popat, Cassandra Sommer, Brittany Williams
An important aspect of an autonomous vehicle system, aside from the crucial features of path following and obstacle detection, is the ability to accurately and effectively recognize visual cues present on the roads, such as traffic lanes, signs and lights. This ability is important because very few vehicles on the road are autonomously driven, so autonomous vehicles must integrate with conventionally operated ones. An enhanced infrastructure that would let autonomous vehicles navigate lanes and intersections non-visually is not yet available. Recognizing these cues efficiently can be a complicated task, as it not only involves constantly gathering visual information from the vehicle's surroundings but also requires accurate processing. Ambiguity of traffic control signals challenges even the most advanced computer decision-making algorithms. The vehicle must then keep a predetermined position within its travel lane based on its interpretation of its surroundings.
2017-03-28
Technical Paper
2017-01-0068
Pablo Sauras-Perez, Andrea Gil, Jasprit Singh Gill, Pierluigi Pisu, Joachim Taiber
Fully autonomous vehicles are expected to reach the market within the next 20 years. Advances in their development are creating paradigm shifts in different automotive-related research areas. Vehicle interior design and human-vehicle interaction are evolving to enable interaction flexibility inside the car. However, today's vehicle manufacturers' autonomous car concepts retain the steering wheel as a control element. While this approach allows the driver to take over the vehicle route if needed, it constrains the previously mentioned interaction flexibility. Other approaches, such as the one proposed by Google, enable interaction flexibility by removing the steering wheel and the accelerator and brake pedals. However, this prevents users from taking control over the vehicle route when needed, not allowing them to make spontaneous on-route decisions, such as stopping at a specific point of interest.
2017-03-28
Technical Paper
2017-01-0043
Michael Smart, Satish Vaishnav, Steven Waslander
Robust lane marking detection remains a challenge, particularly in temperate climates where markings degrade rapidly due to winter conditions and snow removal equipment. In previous work on stereo images, dynamic Bayesian networks with heuristic features were used, whose distributions are identified using unsupervised expectation maximization, which greatly reduced sensitivity to initialization. This work has been extended in three important respects. First, the situations where poor RANSAC hypotheses were generated and significantly contributed to false alarms have been corrected. Second, the null hypothesis is reformulated to guarantee that detected hypotheses satisfy a minimum likelihood. Third, the computational requirements of tracking and pairing have been reduced by computing an upper bound on the marginal likelihood of all part hypotheses and rejecting a part hypothesis if its upper bound is less likely than the null hypothesis.
2017-03-28
Technical Paper
2017-01-0102
Mahdi Heydari, Feng Dang, Ankit Goila, Yang Wang, Hanlong Yang
In this paper, a sensor fusion approach is introduced to estimate lane departure. The proposed algorithm combines camera and inertial navigation sensor data with the vehicle dynamics to estimate the vehicle path and the lane departure time. The lane path and vehicle path are estimated using extended Kalman filters. This algorithm can be used to provide early warning of lane departure in order to increase driving safety. Additionally, it can reduce the latency of information embedded in the controls, so that vehicle lateral control performance during lane keeping can be significantly improved in Advanced Driver Assistance Systems (ADAS) or autonomous vehicles. Furthermore, it improves lane detection reliability in situations when the camera fails to detect lanes. Several scenarios are simulated to show the effectiveness of the proposed algorithm.
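The lane-departure-time idea in this abstract reduces, once the filter has produced a lateral offset and lateral velocity estimate, to a simple kinematic computation. This is a minimal sketch, not the paper's extended-Kalman-filter formulation; the lane width and state values are assumed.

```python
def time_to_lane_crossing(y, v_y, lane_half_width=1.75):
    """Time until the vehicle's lateral position reaches a lane boundary.
    y: lateral offset from lane center (m), positive toward the left line.
    v_y: lateral velocity (m/s). Returns None when not drifting toward a line."""
    if v_y > 0:
        return (lane_half_width - y) / v_y
    if v_y < 0:
        return (lane_half_width + y) / -v_y
    return None

# Drifting left at 0.25 m/s from 0.5 m left of center:
t = time_to_lane_crossing(0.5, 0.25)   # (1.75 - 0.5) / 0.25 = 5.0 s
```

An early-warning system would raise an alert when this time drops below a threshold tied to the driver's reaction time.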
2017-03-28
Technical Paper
2017-01-0104
Maryam Moosaei, Yi Zhang, Ashley Micks, Simon Smith, Madeline J. Goh, Vidya Nariyambut Murali
Traffic light detection is critical for safe behavior in a world where technology on vehicles is growing more complex. In this work we outline a deep learning based solution for traffic light detection that leverages virtual data for affordable and efficient supervised learning. Using Unreal Engine, we generated a virtual dataset by moving a virtual camera through a variety of intersection scenes while varying parameters such as lighting, camera position and angle. Using the automatically generated bounding boxes around the illuminated traffic lights themselves, we trained an 8-layer deep neural network (DNN), without pre-training, for classification of traffic light signals (green, amber, red). After training on virtual data, we tested the network on real world data collected from a forward facing camera on a vehicle. Using color space conversion and contour extraction, we identified candidate regions by filtering based on color, shape and size.
2017-03-28
Technical Paper
2017-01-0116
Ankit Goila, Ambarish Desai, Feng Dang, Jian Dong, Rahul Shetty, Rakesh Babu Kailasa, Mahdi Heydari, Yang Wang, Yue Sun, Manikanta Jonnalagadda, Mohammed Alhasan, Hanlong Yang, Katherine R. Lastoskie
ADAS feature development involves multidisciplinary technical fields as well as a wide variety of sensors and actuators; the early design process therefore requires considerable resources and time for collaboration and implementation. In this paper, we demonstrate an alternative way of developing ADAS features by using an RC car with low-cost hobby-grade controllers, such as the Arduino Due and Raspberry Pi. A camera and a single-beam LiDAR are used together with the Raspberry Pi, and the free OpenCV software is used for developing lane detection and object recognition. We demonstrate the high-level algorithm architecture, development and potential operation, as well as high-level testing of various features and functionalities. The developed vehicle can be used as an early-design-phase prototype as well as a functional safety test bench.
2017-03-28
Technical Paper
2017-01-0046
Mohamed Aladem, Samir Rawashdeh, Nathir Rawashdeh
To reliably implement driver-assist features and, ultimately, self-driving cars, autonomous driving systems will likely rely on a variety of sensor types including GPS, RADAR, LASER range finders, and cameras. Cameras are an essential sensory component because they lend themselves to the task of identifying the object types a self-driving vehicle is likely to encounter, such as pedestrians, cyclists, animals, other cars, or objects on the road. A stereo vision system adds the capability of tracking object locations and trajectories relative to the vehicle. This information can be essential for an autonomous driving control system that aims to avoid collisions and localize itself in the street scene. In this paper, we present a visual odometry algorithm based on a stereo camera to perform localization relative to the surrounding environment for purposes of navigation and hazard avoidance. Using a stereo camera enhances the accuracy with respect to monocular visual odometry.
2017-03-28
Technical Paper
2017-01-0113
Vaclav Jirovsky
Today's vehicles are increasingly equipped with systems that autonomously influence vehicle behavior. The near future will bring more systems of this kind, and OEMs in Europe expect significant penetration of fully autonomous vehicles in regular traffic around 2025. Driving is a highly multitasking activity, and human errors emerge in situations where the driver cannot process and understand the essential amount of information. Future autonomous systems very often rely on some type of inter-vehicular communication, which shall provide the vehicle with a similar or greater amount of information than the driver uses in his decision-making process. Therefore, the currently used, and debatable, one-dimensional quantity TTC (time-to-collision) will definitely become inadequate. Regardless of whether the vehicle is driven by a human or a robot, it is always necessary to know whether a reaction is necessary and which one to perform.
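The one-dimensional TTC quantity that this abstract argues will become inadequate is conventionally computed as range divided by closing speed. A minimal illustration, with assumed values:

```python
def time_to_collision(gap, v_ego, v_lead):
    """Classical 1-D TTC: range divided by closing speed.
    Returns float('inf') when the ego vehicle is not closing the gap."""
    closing = v_ego - v_lead
    return gap / closing if closing > 0 else float('inf')

ttc = time_to_collision(40.0, 25.0, 15.0)  # 40 m gap closing at 10 m/s -> 4.0 s
```

Being a single scalar along one axis, TTC says nothing about lateral geometry or which evasive reaction is feasible, which is exactly the inadequacy the abstract points to.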
2017-03-28
Technical Paper
2017-01-1672
Siddartha Khastgir, Gunwant Dhadyalla, Stewart Birrell, Sean Redmond, Ross Addinall, Paul Jennings
The advent of Advanced Driver Assistance Systems (ADAS) and autonomous driving has created a new challenge for functional verification and validation. The explosion of the test sample space of possible input combinations needs to be handled in an intelligent manner to meet cost and time targets for the development of such systems. Various test methods, such as VEHiL (Vehicle Hardware-in-the-Loop), Vehicle-in-the-Loop and coordinated automated driving, have been developed for the validation of ADAS and autonomous systems. Increasingly, driving simulators are used for testing ADAS and autonomous systems, as they offer a safer and more reproducible environment for verifying such systems. While each of these test methods serves a specific purpose, they share a common challenge: all of them require the generation of test scenarios against which the systems are to be tested.
2017-03-28
Technical Paper
2017-01-0037
Xianyao Ping, Gangfeng Tan, Yahui Wu, Binyu Mei, Yuxin Pang
Heavy-duty vehicles in mountainous areas operate under complex driving conditions over long transport distances. The driver's delayed perception of the environment affects the fuel economy of the vehicle, and unreasonable acceleration and deceleration on slopes increase fuel consumption. Improving the performance of the engine and transmission system offers limited energy-saving potential, and most fuel-efficient driving assistant systems do not consider road conditions. In this research, a low-dimensional economic driving optimization algorithm with fast calculation speed is established to plan an accurate, real-time economic driving scheme based on slope information. The optimization algorithm depends little on experimental fuel consumption data and therefore adapts well to most vehicles. During the first drive on a slope, the slope gradient and length are measured and stored.
2017-03-28
Technical Paper
2017-01-0039
Toshiya Hirose, Yasufumi Ohtsuka, Masato Gokan
1. Background: A vehicle-to-vehicle communication system (V2V) can send and receive vehicle information by wireless communication and serve as a driving safety assist for the driver. In particular, it must be clarified what the appropriate activation timing is for the assist levels of (a) collision information, (b) collision caution and (c) collision warning. This study focused on the activation timing of collision information, caution and warning with V2V; the experiment was carried out with a driving simulator to investigate effective activation timing for the three assist levels. 2. Experimental method: The experimental scenario had four situations at a blind intersection: (1) assist for braking, (2) assist for accelerating, (3) assist for right turn and (4) assist for left turn. These were set on the basis of traffic accident data from Japan. The activation timings of the three levels were based on TTI (time to intersection) and TTC (time to collision).
2017-03-28
Technical Paper
2017-01-1638
Felix Gow, Lifeng Guan, Jooil Park
A TPMS sensor measures air pressure and temperature in the tire and transmits tire information as wireless messages to the TPMS central unit, which contains an RF receiver. The TPMS central unit needs to determine the exact sensor locations (e.g. front left, front right, rear left or rear right) in order to correctly identify the tire with low pressure, which is then displayed on the dashboard. Thus, automatic determination of tire locations by the TPMS system itself (tire auto localization) is required. Tire auto localization has been implemented in several ways; a new method is proposed in this paper. The proposed method uses at least two RF transceivers as repeaters. Each transceiver receives wireless messages (e.g. pressure, temperature, sensor ID) from the nearest TPMS sensor and transmits them, together with the RF transceiver identity, to the TPMS central unit.
2017-03-28
Technical Paper
2017-01-0041
Shengguang Xiong, Gangfeng Tan, Xuexun Guo, Longjie Xiao
An Automotive Front Lighting System (AFS) receives the steering signal and the vehicle speed signal to automatically adjust the position of the headlamp beams, providing drivers with more information about the road ahead and protecting their safety when driving at night. AFS works when there is a steering signal input. However, drivers often need information about the road ahead before they turn the steering wheel, for example when the vehicle is about to round a sharp corner, and AFS will not work in such a situation. To solve this problem, this paper studies how to preview the road ahead and optimize the activation timing of AFS based on GIS (Geographic Information System) and GPS (Global Positioning System) data. The paper builds a model of the vehicle steering characteristics relating headlamp steering lighting to the steering wheel angle, based on the follow-up steering law of AFS headlamps.
2017-03-28
Technical Paper
2017-01-0070
Longxiang Guo, Sagar Manglani, Xuehao Li, Yunyi Jia
Autonomous driving technologies can provide better safety, comfort and efficiency for future transportation. Most research in this area focuses on developing sensing and control approaches to achieve autonomous driving functions, such as model-based approaches and neural-network-based approaches. However, even if the autonomous driving functions are ideally achieved, the performance of the system is still subject to sensing exceptions, and little research has studied how to handle such exceptions efficiently. In existing autonomous approaches, sensors such as cameras, radars and lidars usually need to be fully calibrated or trained after being mounted on the vehicle and before being used for autonomous driving. A simple unexpected change to a sensor (e.g., a change in the mounting position or angle of a camera) may cause the autonomous driving function to fail.
2017-03-28
Technical Paper
2017-01-0072
Yang Zheng, Navid Shokouhi, Amardeep Sathyanarayana, John Hansen
The proliferation of smartphone applications has had a great impact on the automotive industry. Smartphones contain a variety of useful sensors, including cameras and microphones, Inertial Measurement Units (IMU) such as the accelerometer and gyroscope, and GPS. These multi-channel signals can be synchronized to provide a comprehensive description of driving scenarios. The smartphone could therefore be leveraged for in-vehicle data collection, monitoring, and added safety options/feedback strategies. In our previous study, a smartphone/tablet solution with our Android app MobileUTDrive was developed. This platform provides a cost-effective approach that allows a wider range of naturalistic driving study opportunities for drivers operating their own vehicles. The most compelling reason for introducing the smartphone platform is its potential to be integrated with intelligent telematics services.
2017-03-28
Technical Paper
2017-01-0117
Raja Sekhar Dheekonda, Sampad Panda, Md Nazmuzzaman Khan, Mohammad Hasan, Sohel Anwar
Accuracy in detecting a moving object is critical to autonomous driving and advanced driver assistance systems. By including object classification from multiple sensor detections, the model of the object or environment can be identified more accurately. The critical parameters involved in improving the accuracy are the size and speed of the moving object. In a laboratory experiment, we used three different types of sensors to identify the object in real time: a digital camera with 8-megapixel resolution, a LIDAR with 40 m range, and an ultrasonic distance transducer. The moving object to be detected was set in motion at different speeds in the direction transverse to the vehicle (sensor), and its size was also varied. All sensor data were processed on a real-time prototyping microcontroller and used to define a composite object representation that provides the class information in the core object's description.
2017-03-28
Technical Paper
2017-01-0038
Corwin Stout, Milos Milacic, Fazal Syed, Ming Kuang
In recent years, we have witnessed an increased discrepancy between fuel economy numbers reported in accordance with EPA testing procedures and the real-world fuel economy reported by drivers. The debates range from calls for new testing procedures to the observation that driver complaints create a one-sided distribution: drivers who get better fuel economy do not complain about it, only those whose fuel economy falls short of expectations. In this paper, we demonstrate the fuel economy improvements that can be obtained if the driver is properly skilled in efficient driving. The implementation of SmartGauge with EcoGuide in the Ford C-MAX Hybrid in 2013 helped drivers improve their fuel economy on hybrid vehicles. Further development of this idea led to the EcoCoach, to be implemented in all future Ford vehicles.
2017-03-28
Technical Paper
2017-01-0035
Binyu Mei, Xuexun Guo, Gangfeng Tan, Yongbing Xu, Mengying Yang
Vehicle speed is an important factor in driving safety, being directly related to the stability and braking performance of the vehicle. Moreover, precise measurement of vehicle speed is the basis of several vehicle active safety systems, and high-quality speed information will also play an important role in future intelligent transportation. The commonly used newer techniques for vehicle speed measurement are radar, infrared and ultrasonic sensing, but radar performs poorly at low speed, infrared is easily affected by environmental factors, and ultrasonic measurement accuracy is low. Focusing on these issues, image matching technology is used to measure vehicle speed in this paper. Image information of the road in front of the vehicle is collected, and the pixel displacement of the vehicle is calculated by the matching system; thus the vehicle speed can be obtained accurately.
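The core conversion described in this abstract, from pixel displacement between matched frames to vehicle speed, reduces under a fixed ground-plane scale to a one-line formula. The scale and frame-rate values below are assumptions for illustration, not the paper's calibration.

```python
def speed_from_pixel_shift(pixel_disp, meters_per_pixel, frame_rate):
    """Vehicle speed from the pixel displacement of matched road features
    between consecutive frames; assumes a calibrated ground-plane scale."""
    return pixel_disp * meters_per_pixel * frame_rate

# 12-pixel shift, 0.05 m per pixel, 30 fps -> 18 m/s (64.8 km/h)
v = speed_from_pixel_shift(12, 0.05, 30)
```

In practice the meters-per-pixel scale varies across the image due to perspective, which is why the matching system must work on a rectified or calibrated view of the road.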
2017-03-28
Technical Paper
2017-01-1408
Satoshi Kozai, Yoshihiko Takahashi, Akihiro Kida, Takayuki Hiromitsu, Shinji Kitaura, Sadamasa Sawada, Gladys Acervo, Marius Ichim
The goal of both automakers and vehicle users is to minimize the negative impacts of vehicles on society, such as traffic accidents, not only on the road but also in parking areas, while optimizing the enjoyment of using a car, comfort, and usability. To this end, we have already provided an automatic brake system (ICS) for static obstacles in parking areas. We have also developed the Rear Cross Traffic Auto Brake (RCTAB) system, which detects a vehicle approaching from the side when backing out of a parking space. The RCTAB system specifications were decided based on two pieces of information: the speed of vehicles approaching in parking areas and the maximum backing speed. The system consists of a radar shared with the Blind Spot Monitor and an ECU shared with the ICS computer. The radar detects the approaching vehicle; the ICS computer judges the collision prediction and requests braking force and driving force from the brake and engine computers.
2017-03-28
Technical Paper
2017-01-0069
Venkatesh Raman, Mayur Narsude, Damodharan Padmanaban
This paper describes the main challenges encountered during the data enrichment phase of connected vehicle experiments. It also compares data imputation approaches for data coming from actual driving scenarios, obtained using in-vehicle data acquisition devices. Three distinct window-based approaches were used for cleaning and imputing the missing values in different CAN-bus (Controller Area Network) signals. The window lengths used for the three approaches were: 1) the entire time course, 2) a day, and 3) a trip (defined as the duration between vehicle engine ON and OFF). An algorithm for identifying engine ON and OFF events is also presented, for cases where this signal is not explicitly captured during data acquisition. As a case study, these imputation techniques were applied to data from the vehicle's CAN bus in a driver behavior classification experiment.
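The trip-windowed variant (approach 3) can be sketched as follows: missing samples are filled only from values within the same engine-ON-to-OFF trip, so values never leak across trip boundaries. The data layout and the fill rule (forward fill) are illustrative assumptions, not the paper's exact method.

```python
def impute_by_trip(values, trip_ids):
    """Forward-fill None values, restarting at each new trip id so samples
    from one trip never fill gaps in another."""
    filled, last, last_trip = [], None, None
    for v, trip in zip(values, trip_ids):
        if trip != last_trip:
            last, last_trip = None, trip   # reset at the trip boundary
        if v is None:
            v = last                       # stays None before a trip's first sample
        else:
            last = v
        filled.append(v)
    return filled

vals  = [1.0, None, 2.0, None, None, 5.0]
trips = [1,   1,    1,   2,    2,    2]
# -> [1.0, 1.0, 2.0, None, None, 5.0]
```

The day- and whole-time-course windows from the abstract would use the same logic with coarser window identifiers in place of trip ids.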
2017-03-28
Technical Paper
2017-01-1405
Tzu-Sung Wu
Autonomous Emergency Braking Systems (AEBS) usually use radar, (stereo) camera and/or LiDAR-based technology to identify potential collision partners ahead of the car, so as to warn the driver or automatically brake to avoid or mitigate a crash. The advantage of the camera is its lower cost; however, cameras in AEBS have an unavoidable weakness: image recognition cannot achieve good accuracy in poor or over-exposed lighting conditions. Compensation by other sensors is therefore important. Motivated by the need to reduce false detections, we propose a radar-based Pedestrian-and-Vehicle Recognition (PVR) algorithm for AEBS. The PVR uses the radar cross section (RCS) and the standard deviation of the obstacle's width, checking whether threshold values of RCS and width standard deviation are crossed in order to identify whether the object is a pedestrian or a vehicle.
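The thresholding step of the proposed PVR can be sketched as follows. The threshold values, and the assumption that vehicles return a larger RCS and a steadier measured width than pedestrians, are placeholders for illustration, not the paper's calibrated rules.

```python
def classify_radar_object(rcs_dbsm, width_std_m,
                          rcs_thresh=5.0, width_std_thresh=0.3):
    """Label a radar object from its radar cross section and the standard
    deviation of its measured width: a large, steady return suggests a
    vehicle; a small, fluctuating one suggests a pedestrian."""
    if rcs_dbsm >= rcs_thresh and width_std_m <= width_std_thresh:
        return "vehicle"
    if rcs_dbsm < rcs_thresh and width_std_m > width_std_thresh:
        return "pedestrian"
    return "unknown"
```

Objects that match neither pattern fall through to "unknown", which is where camera compensation, as discussed in the abstract, would take over.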
2017-03-28
Technical Paper
2017-01-0110
Hao Sun, Weiwen Deng, Chen Su, Jian Wu
The ability to recognize traffic vehicles' lane change maneuvers lays the foundation for predicting their long-term trajectories in real time, which is a key component of Advanced Driver Assistance Systems (ADAS) and autonomous automobiles. Learning-based approaches are powerful and efficient and have been used in previous research to solve maneuver recognition problems for the ego vehicle. However, since the parameters and driving states of surrounding traffic vehicles are hardly observable by exteroceptive sensors, the performance of traditional methods cannot be guaranteed. In this paper, a novel approach using multi-class probability estimates and a Bayesian inference model is proposed for traffic vehicle lane change maneuver recognition. The multi-class recognition problem is first decomposed into three binary problems under the error correcting output codes (ECOC) framework.
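The ECOC decomposition mentioned in this abstract can be illustrated with a one-vs-one style code over three hypothetical maneuver classes (left change, lane keep, right change). The codeword matrix and decoding rule below are a generic example of the framework, not the paper's design.

```python
# Rows: classes; columns: binary problems. +1/-1 mark class membership
# in each binary problem; 0 means the class is not used in that problem.
CODES = {
    "left":  (+1, +1,  0),
    "keep":  (-1,  0, +1),
    "right": ( 0, -1, -1),
}

def ecoc_decode(binary_outputs):
    """Pick the class whose codeword best agrees with the binary
    classifiers' +1/-1 outputs, skipping positions the class is
    not involved in."""
    def score(code):
        return sum(c * o for c, o in zip(code, binary_outputs) if c != 0)
    return max(CODES, key=lambda cls: score(CODES[cls]))

label = ecoc_decode((+1, +1, -1))   # agrees with "left" on both of its problems
```

Replacing the hard +1/-1 outputs with per-classifier probabilities gives the multi-class probability estimates the abstract feeds into its Bayesian inference model.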
2017-03-28
Technical Paper
2017-01-0042
David Andrade, Rodrigo Adamshuk, William Omoto, Felipe Franco, João Henrique Neme, Sergio Okida, Angelo Tusset, Rodrigo Amaral, Artur Ventura, Max Mauro Dias Santos
Advanced driver assistance systems (ADAS) are designed to improve driving safety and reduce driving stress on the road. These systems help maintain a safe distance from the car in front, alert the driver to objects in the vehicle's path, warn of an unintended departure from the lane, or even intervene automatically. According to the National Highway Traffic Safety Administration (NHTSA), in 94 percent of crashes the immediate reason, often the last failure in the causal chain of events leading up to the crash, is assigned to the driver. ADAS testing and rating are a development trend in NHTSA's New Car Assessment Program (NCAP), which increases manufacturers' investment in such solutions. Camera-based ADAS solutions for Lane Departure Warning (LDW) require extensive mathematical operations in image processing. Edge detection methods are frequently used in such applications; however, noise and outlier reduction remain challenging tasks.
2017-03-28
Technical Paper
2017-01-0047
Jie Bai, Sihan Chen, Hua Cui, Xin Bi, Libo Huang
Radar-based advanced driver assistance systems (ADAS) such as autonomous emergency braking (AEB) and forward collision warning (FCW) can reduce accidents and thereby make vehicles, drivers and pedestrians safer. For active safety, automotive millimeter-wave radar plays an indispensable role in the automotive environmental sensing system, since it works effectively in the bad weather where the camera fails. One crucial task of automotive radar is to precisely detect and distinguish objects close to each other as road conditions become increasingly complex. Nowadays almost all automotive radar products work in two dimensions, measuring only range and azimuth. However, it is sometimes not easy for them to differentiate objects in their field of view, such as a car, a manhole cover and a guide board, when these align with each other in the vertical direction.