CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
This application claims the benefit of and priority to (i) U.S. Provisional Application No. 63/452,218, filed Mar. 15, 2023, (ii) U.S. Provisional Application No. 63/539,920, filed Sep. 22, 2023, and (iii) U.S. Provisional Application No. 63/587,611, filed Oct. 3, 2023, the disclosures of each of which are incorporated by reference herein in their entireties.
FIELD
The present disclosure relates to vision systems. More specifically, the present disclosure relates to vision systems for drive throughs.
BACKGROUND
Drive through systems may have multiple lanes with a combination of display interfaces, microphones, speakers, and vehicle detection capabilities. When a customer arrives at the drive through system, the customer may communicate via a menu board or unit with an order taker to place their order. The customer then pulls around to pay and pick up the customer's order. Cameras may obtain image data of vehicles at the drive through system.
SUMMARY
One implementation of the present disclosure is a drive through system, according to some embodiments. The drive through system can include a transaction system, such as a point of sale unit, configured to be operated to implement a point of sale. The drive through system can also include multiple cameras configured to obtain image data of a lane of the drive through system. The drive through system can also include processing circuitry configured to determine, using the image data, a characteristic of a vehicle at the lane, and operate the point of sale unit to provide an indication of the characteristic of the vehicle. The processing circuitry may also be configured to, responsive to detection of one or more of multiple vehicles using the image data, perform at least one of (i) initiating an alert via the point of sale unit for staff of the drive through system, (ii) initiating an action to open an additional lane, or (iii) initiating a kitchen action to prepare one or more food or beverage items.
The characteristic of the vehicle may include a type of the vehicle selected from multiple predetermined types of vehicles using the image data, and a color of the vehicle. The multiple predetermined types of vehicles can include at least a car, a truck, and an SUV.
The processing circuitry may be further configured to determine, using the image data from one or more of the multiple cameras and predetermined zones, a location of the vehicle along a path of the lane. The predetermined zones may be defined for the image data of each of the multiple cameras to account for different orientations and positions of the cameras, which result in the image data of the predetermined zones spanning different portions of the image data of each of the cameras. The processing circuitry can also be configured to, responsive to the vehicle arriving at an order placement location of the zones, operate the point of sale unit to provide the indication of the characteristic of the vehicle.
The indication of the characteristic of the vehicle can include at least one of a textual indication of the characteristic of the vehicle or a visual representation of the characteristic of the vehicle. The drive through system can also include at least one of a point of sale unit or a display screen (e.g., notifier, alarm) at a pickup window of the drive through system viewable by an employee of the drive through system. The processing circuitry may be configured to operate at least one of the point of sale unit or the display screen at the pickup window to provide order data associated with the vehicle and the characteristic of the vehicle responsive to the vehicle arriving at the pickup window.
The processing circuitry can be configured to track multiple vehicles through the lane of the drive through system using the image data obtained from the cameras. The processing circuitry can also be configured to, responsive to a number of the multiple vehicles, at least one of initiate an alert for staff of the drive through system, initiate an action to open an additional lane, or initiate a kitchen action to prepare a quantity of food items. The processing circuitry can also be configured to operate, based on the image data, a carry-out alert device to notify carry-out staff regarding (i) the characteristic of a vehicle tracked to a carry-out location, (ii) an identification of the carry-out location, of a plurality of carry-out locations, at which the vehicle is positioned, and (iii) order data for the vehicle at the carry-out location.
Another implementation of the present disclosure is a method, according to some embodiments. The method may include obtaining image data of a lane of a drive through system and determining, using the image data, a visual characteristic of a vehicle at the lane. The method can also include operating a point of sale unit to provide an indication of the visual characteristic of the vehicle. The point of sale unit may be configured to be operated to implement a point of sale. The visual characteristic may include a type of the vehicle selected from multiple predetermined types using the image data, and a color of the vehicle. The method can also include processing, at the point of sale unit, vehicle information relating to the visual characteristic of the vehicle and order information for an order received from an occupant at the drive through to provide compiled order information. The method may also include updating multiple display devices of the drive through system to include an indication of the compiled order information during a service journey of the vehicle at the drive through.
The multiple types of vehicles can include at least a car, a truck, and an SUV. The method can also include determining, using the image data and multiple predetermined zones, a location of the vehicle along a path of the lane. The method can also include, responsive to the vehicle arriving at an order placement location of the zones, operating multiple point of sale units and at least one display screen to provide the indication of the characteristic of the vehicle to staff of the drive through system. The multiple point of sale units may be positioned at different employee stations of the drive through system and the display screen may be positioned in a kitchen or at a pickup window of the drive through system.
The indication of the visual characteristic of the vehicle can include at least one of a textual indication of the characteristic of the vehicle or a visual representation of the visual characteristic of the vehicle. The method can further include operating an alert device to provide an indication of the compiled order information responsive to the vehicle arriving at a pickup window. The alert device may be positioned at a pickup window of the drive through system and viewable by an employee of the drive through system.
The method can also include tracking multiple vehicles through the lane of the drive through system using the image data obtained from multiple cameras. The method can also include, responsive to a number of the vehicles, at least one of initiating an alert for staff of the drive through system, initiating an action to open an additional lane, or initiating a kitchen action to prepare a quantity of food items. The method can also include operating a carry-out alert device to provide an indication of the compiled order information responsive to the vehicle arriving at a carry-out location. The carry-out alert device can also be configured to provide an indication of the carry-out location, of multiple carry-out locations, at which the vehicle is located.
Another implementation of the present disclosure is a control system for a drive through, according to some embodiments. The control system includes processing circuitry, according to some embodiments. The processing circuitry may determine, using image data obtained from multiple cameras, a characteristic of a vehicle in a lane of the drive through. The processing circuitry can operate a point of sale unit to provide an indication of the characteristic of the vehicle. The processing circuitry can, responsive to detection of one or more of a plurality of vehicles using the image data, perform at least one of (i) initiating an alert via the point of sale unit for staff of the drive through, (ii) initiating an action to open an additional lane, or (iii) initiating a kitchen action to prepare one or more food or beverage items.
In some embodiments, the processing circuitry is further configured to, responsive to detecting that a vehicle has arrived at an order fulfillment location, operate a display screen at the order fulfillment location to display an initially obtained image of the vehicle to order fulfillment staff to facilitate accurate order fulfillment. In some embodiments, the point of sale unit is a first point of sale unit of multiple point of sale units. The processing circuitry is configured to operate the point of sale units to provide the indication of the characteristic of the vehicle, and a location of the vehicle in the drive through on display screens of the point of sale units in unison, according to some embodiments. In some embodiments, the point of sale units are each configured to receive a user input to transition a corresponding one of multiple portable communications devices to communicate on an audio channel according to the user input. In some embodiments, the processing circuitry is further configured to operate a kitchen display screen to provide order information, the characteristic of the vehicle, and a location of the vehicle to kitchen staff of the drive through.
Another implementation of the present disclosure is a point of sale system for a drive through, according to some embodiments. The point of sale system can include a point of sale unit and processing circuitry. The point of sale unit can be configured to be operated to implement a point of sale and obtain order data. The processing circuitry can be configured to determine, using image data obtained from a camera, a characteristic of a vehicle at a lane of the drive through. The processing circuitry may also be configured to operate the point of sale unit to provide an indication of the characteristic of the vehicle.
The characteristic of the vehicle can include a type of the vehicle selected from multiple predetermined types of vehicles using the image data, and a color of the vehicle. The multiple predetermined types of vehicles can include at least a car, a truck, and an SUV.
The processing circuitry can further be configured to determine, using the image data and multiple predetermined zones, a location of the vehicle along a path of the lane. The processing circuitry can also be configured to, responsive to the vehicle arriving at an order placement location of the zones, operate the point of sale unit to provide the indication of the characteristic of the vehicle.
The indication of the characteristic of the vehicle can include at least one of a textual indication of the characteristic of the vehicle or a visual representation of the characteristic of the vehicle. The point of sale system can include an alert device at a pickup window of the drive through system viewable by an employee of the drive through system. The processing circuitry may be configured to operate the alert device to provide the order data associated with the vehicle and the characteristic of the vehicle responsive to the vehicle arriving at the pickup window.
The processing circuitry can be configured to track multiple vehicles through the lane of the drive through system using the image data obtained from multiple cameras. The processing circuitry can also, responsive to a number of the multiple vehicles, at least one of initiate an alert for staff of the drive through system, initiate an action to open an additional lane, or initiate a kitchen action to prepare a quantity of food items.
This summary is illustrative only and is not intended to be limiting. Various aspects, inventive features, and advantages of the systems described herein are set forth in more detail below.
Various aspects of the present disclosure, such as the graphical user interfaces (GUIs) shown and described, can be implemented in connection with the drive-through systems and interfaces set forth in U.S. Provisional Application No. 63/539,920 filed Sep. 22, 2023, which is incorporated herein by reference in its entirety.
The disclosure will become more fully understood from the following detailed description, taken in conjunction with the accompanying FIGURES, wherein like reference numerals refer to like elements, in which:
DETAILED DESCRIPTION
Before turning to the FIGURES, which illustrate the exemplary embodiments in detail, it should be understood that the present disclosure is not limited to the details or methodology set forth in the description or illustrated in the FIGURES. It should also be understood that the terminology used herein is for the purpose of description only and should not be regarded as limiting.
Overview
Referring generally to the FIGURES, a vision system for a drive through system can include cameras that track vehicles throughout their life cycle in the drive through system. The vision system may implement a machine learning technique in order to automatically identify a type and color of the vehicles in the drive through system. The vision system is integrated with a transaction system, e.g., a system of point of sale units, such that the point of sale units are updated to indicate the type and color of the vehicles. Integrating the vision system with the point of sale units facilitates improved speed and order fulfillment accuracy of the drive through system.
Drive Through System
Referring particularly to
The starting location 32 and the ending location 34 may define starting and ending points for customers, with the order lanes 20 defining different paths between the starting location 32 and the ending location 34. In some embodiments, each of the order lanes 20 defines a corresponding order zone 24 at which the customer may place an order for pickup at one of the windows 28. In some embodiments, the first lane 20a includes a first order zone 24a, the second lane 20b includes a second order zone 24b, the third lane 20c includes a third order zone 24c, and the fourth lane 20d includes a fourth order zone 24d. Each of the order zones 24 includes a corresponding drive through unit 108 (e.g., a menu board, a kiosk, a field communications unit, an order placement unit at which a customer may place an order, etc.) which facilitates communications between customers at the order zones 24 and personnel of the store 14, as well as a display screen or signage indicating available items for purchase, and order or POS information (e.g., a number of items, corresponding cost, total cost, tax, etc., for an ongoing order). In some embodiments, the first order lane 20a includes a first drive through unit 108a, the second order lane 20b includes a second drive through unit 108b, the third order lane 20c includes a third drive through unit 108c, and the fourth order lane 20d includes a fourth drive through unit 108d. In some embodiments, the order lanes 20 define one or more paths that include a series of discrete geometric forms (e.g., polygonal shapes, curvilinear shapes) mapped to different physical locations along the order lanes 20. The series of geometric forms may match between multiple cameras (e.g., cameras 114) that have different fields of view in order to facilitate awareness, identification, and tracking of vehicles along the one or more paths between the starting location 32 and the ending location 34.
The drive through units 108 are interfaces that may be components of a communications system or POS system, shown as communications system 100 of the drive through system 10. In some embodiments, the drive through units 108 are integrated via one or more POS systems. The communications system 100 includes the drive through units 108, a controller 102, one or more imaging devices 114 (e.g., cameras) positioned about the lane system 16, a cloud computing system 104, one or more input devices, shown as order taking stations 110, and one or more wearable communications devices 112. In some embodiments, the one or more order taking stations 110 are each disposed in a corresponding zone within the store 14 proximate the windows 28. The order taking stations 110 may include a touch screen or user interface configured to both display image data (e.g., a graphical user interface, a menu, selectable options for an order or POS, etc.), and receive user inputs from a corresponding employee of the store 14 to add items to an order or POS.
The wearable communications devices 112 may have the form of headphones, earpieces, etc., and can include both speakers (e.g., acoustic transducers, aural output devices, etc.) and microphones (e.g., aural input devices). In some embodiments, the controller 102 is communicably coupled with each of the order taking stations 110 at the windows 28, the imaging devices 114, and the wearable communications devices 112. The controller 102 may also be communicably coupled with a cloud computing system 104 and can upload or provide various information regarding points of sale to the cloud computing system 104 for analysis. In some embodiments, the controller 102 is configured to receive, from the cloud computing system 104, software or firmware updates for various devices of the communications system 100. In some embodiments, the wearable communications devices 112 may be portable communications devices including but not limited to headphones, earphones, earbuds, devices configured to clip to a belt or article of clothing, ear pieces, etc., or any other device having at least one of a speaker or microphone and configured to be worn, held, or otherwise move with an individual to establish an end point for audio communications (e.g., to at least one of produce sound via a speaker, or receive a sound input via a microphone). It should be understood that the components of the communications system 100 (e.g., the drive through units 108, the controller 102, the one or more imaging devices 114, the cloud computing system 104, the order taking stations 110, and the wearable communications devices 112) may optionally be integrated with each other via a POS (e.g., all such components or a subset of the foregoing components). Integrating the components of the communications system 100 via the POS may facilitate improved efficiency of the drive through, for example.
The communications system 100 provides selective communicability according to multiple channels between any of the personnel (e.g., personnel 40a at the first window 28a, personnel 40b at the second window 28b, and personnel 40c at the third window 28c) and customers at one of the order zones (e.g., the first order zone 24a, the second order zone 24b, the third order zone 24c, and the fourth order zone 24d) via the wearable communications devices 112 and the drive through units 108. In some embodiments, each of the drive through units 108 is configured to communicate according to a corresponding channel (e.g., a first order channel, a second order channel, a third order channel, and a fourth order channel) and the personnel at the windows 28 can selectively establish communicability with the customers at the different order lanes 20 by selectively switching between the channels such that the wearable communications devices 112 are communicatively coupled on a selected one of the channels. In some embodiments, the communications system 100 also facilitates private communication between one or more of the personnel 40 within the store 14 or at other remote locations (e.g., a service center). For example, the personnel may include kitchen personnel 40d that may communicate with any of the personnel 40a, the personnel 40b, or the personnel 40c at the windows 28 via one or more remote channels. In some embodiments, the kitchen personnel 40d may also listen on one or more of the channels on which the customers at the zones 24 can communicate.
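By way of a non-limiting illustration, the channel model described above may be sketched in the following Python listing; the channel names, device identifiers, and helper functions are illustrative assumptions rather than an actual implementation of the communications system 100.

    # Sketch of per-lane order channels; all names are illustrative.
    channels = {
        "order_1": {"unit": "drive_through_unit_108a", "listeners": set()},
        "order_2": {"unit": "drive_through_unit_108b", "listeners": set()},
        "order_3": {"unit": "drive_through_unit_108c", "listeners": set()},
    }

    def switch_channel(device_id, channel):
        # Move a wearable communications device onto the selected order
        # channel, leaving whichever channel it was previously on.
        for ch in channels.values():
            ch["listeners"].discard(device_id)
        channels[channel]["listeners"].add(device_id)

    def monitor_channel(device_id, channel):
        # Kitchen personnel may additionally listen on an order channel
        # without leaving their current channel.
        channels[channel]["listeners"].add(device_id)

    switch_channel("headset_window_1", "order_1")   # take orders from lane 1
    monitor_channel("headset_kitchen", "order_1")   # kitchen listens in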
Referring particularly to
The drive through unit 108 may also include a microphone 122 configured to capture audio (e.g., spoken audio) from the customer and transmit data (audio data, audio signals) to the personnel 40 that is taking the customer's order via a corresponding channel of the communications system 100 for the order lane 20 of the order zone 24. In some embodiments, the drive through unit 108 also includes a speaker 120 configured to provide audio data of the voice or spoken words of the personnel 40 obtained by the wearable communication device 112 that is communicating on the channel of the drive through unit 108. In this way, the drive through unit 108 may provide a customer end for audio communications and the wearable communications device 112 may provide a personnel end for audio communications between the customer and one of the personnel 40 (e.g., according to an audio or communications channel). In some embodiments, the at least one pressure or force sensor 106, the speaker 120, the microphone 122, the imaging device 114, and the display screen 118 are configured to communicate directly with the controller 102 of the communication system 100. In some embodiments, the pressure or force sensor 106, the speaker 120, the microphone 122, the imaging device 114, and the display screen 118 are configured to communicate with a zone controller 124 that communicates with the controller 102.
Referring again to
Advantageously, the vision system 200 may track vehicles through the drive through system 10 and integrate with POS systems of the drive through system 10. The vision system 200 may facilitate improved order taking, speed of service, and fulfillment accuracy. The vision system 200 integrates directly with the POS units such that graphical user interfaces of the POS units may be updated in real-time based on tracking of vehicles 30 through the drive through system 10 by the vision system 200.
Vision System Offline Setup
Referring to
The setup device 302 may obtain satellite image data of the drive through system 10 from the satellite 306. The satellite image data may indicate different lanes of traffic, emergency lanes, parking lots, etc., of a store or restaurant. The setup device 302 is configured to operate the user interface 304 in order to display the satellite image data to a user, according to some embodiments. In some embodiments, the setup device 302 is configured to operate the user interface 304 to prompt the user to provide inputs indicative of one or more paths of the drive through system 10. The paths of the drive through system 10 may correspond to different lanes of traffic for the store 14. The user may provide to the setup device 302, via the user interface 304, one or more points indicating locations along a first path. The user may provide points corresponding to multiple different paths (e.g., a second path, a third path, etc.), corresponding to a number of lanes of traffic or routes. In some embodiments, the setup device 302 is configured to receive the points for each of the one or more paths and perform a curve fitting technique in order to generate the path based on the points. For example, the setup device 302 may implement a variety of curve fitting techniques based on the points, including but not limited to Bezier curves, B-spline curves, Catmull-Rom splines, interpolating Lagrange curves, etc. In some embodiments, the setup device 302 uses the points to generate paths or curves including control points. The control points may correspond to the points provided by the user via the user interface 304 that the setup device 302 uses to generate the curves or paths. In some embodiments, the points provided to the setup device 302 are global positioning system (GPS) coordinates such as latitude and longitude which are used by the setup device 302 to define the curves.
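As a non-limiting sketch of the curve fitting step, uniform Catmull-Rom interpolation through the user-provided points may proceed as in the following Python listing; the function names and sample coordinates are illustrative assumptions only.

    def catmull_rom_segment(p0, p1, p2, p3, n=20):
        # Interpolate n points between p1 and p2 using the uniform
        # Catmull-Rom basis; p0 and p3 act as tangent controls.
        points = []
        for i in range(n):
            t = i / n
            t2, t3 = t * t, t * t * t
            x = 0.5 * ((2 * p1[0]) + (-p0[0] + p2[0]) * t
                       + (2 * p0[0] - 5 * p1[0] + 4 * p2[0] - p3[0]) * t2
                       + (-p0[0] + 3 * p1[0] - 3 * p2[0] + p3[0]) * t3)
            y = 0.5 * ((2 * p1[1]) + (-p0[1] + p2[1]) * t
                       + (2 * p0[1] - 5 * p1[1] + 4 * p2[1] - p3[1]) * t2
                       + (-p0[1] + 3 * p1[1] - 3 * p2[1] + p3[1]) * t3)
            points.append((x, y))
        return points

    def fit_lane_path(control_points, n=20):
        # Chain Catmull-Rom segments over consecutive control points,
        # duplicating the endpoints so the curve spans the full lane.
        pts = [control_points[0]] + list(control_points) + [control_points[-1]]
        path = []
        for i in range(len(pts) - 3):
            path.extend(catmull_rom_segment(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], n))
        path.append(control_points[-1])
        return path

    # Example: four points provided along an order lane (map coordinates).
    lane_path = fit_lane_path([(0, 0), (10, 2), (20, 8), (30, 9)])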
Once the paths or curves for each lane of traffic are determined by the setup device 302, the setup device 302 may operate the user interface 304 to display the paths or curves. The setup device 302 may also receive updates to the curves or paths as user inputs from the user interface 304. The updates to the curves or paths (e.g., the Catmull-Rom splines) may include the addition or removal of control points, the adjustment (e.g., movement) of control points, changing a magnitude or direction of a tangent at one or more of the control points, etc.
The curves may generally define the lanes of traffic of vehicles 30 through the drive through system 10 and facilitate tracking the vehicles 30 through the drive through system 10 (e.g., from starting location 32 to the ending location 34). In the example drive through system shown in
In some embodiments, the setup device 302 is configured to determine, based on the paths or curves defined by the user for the drive through system 10, one or more suggested locations for the cameras 114. If the cameras 114 are already installed such that the cameras 114 can obtain image data from a variety of locations of all of the paths or curves, the setup device 302 may proceed to generation and customization of different tracking zones along the paths or curves. In some embodiments, the setup device 302 is configured to select from a variety of different cameras and determine, based on the cameras' fields of view, positions and orientations for multiple cameras such that image data of all of the paths or curves can be obtained. In some embodiments, the setup device 302 is configured to provide, via the cloud computing system 104, the satellite image data and the curves or paths superimposed on the satellite image data, to a system administrator of the setup system 300. The system administrator (e.g., a manufacturer of the vision system 200) may provide recommended locations, orientations, and types of cameras for the drive through system 10. The system administrator may further provide, to the setup device 302 via the cloud computing system 104, recommended locations, orientations, and types of cameras for the drive through system 10 based on an on-site or remote inspection, according to some embodiments.
The setup device 302 may be configured to determine multiple zones (e.g., geometric shapes, areas, etc.) along the paths or curves. The setup device 302 may automatically determine the zones along the paths or curves and operate the user interface 304 to display the satellite image data, the paths or curves, the points, and the zones superimposed on the satellite image data. In some embodiments, the setup device 302 is configured to use selections provided via the user interface 304 as user inputs, and to provide different image data from corresponding cameras 114 to the user via the user interface 304 in response to the selections. In some embodiments, the setup device 302 is configured to display the image data to the user interface 304 such that the user may define the zones.
The user interface 304 is configured to receive user inputs to define points in the image data of each of the cameras 114 to define zones along the curves or paths. In some embodiments, the setup device 302 is configured to receive four points for each zone in the image data. The setup device 302 may receive four points for each of the zones, and allow, via operation of the user interface 304, the user to define zones for all of the lanes of traffic that are visible in the image data of one of the cameras 114. After the user has defined the zones for the image data of a first camera 114, the user may select a second camera 114, and proceed to define correspondingly numbered zones for the image data of the second camera 114. This process may be repeated by the user via the user interface 304 and the setup device 302 until all of the zones have been defined for the image data of each of the cameras 114.
The user may be prompted to define the zones and assign corresponding numbers or labels in a uniform manner across the image data from different cameras 114. For example, if a zone A is defined in front of a pickup window, then the zone A should be in front of the pickup window for the image data across different cameras 114. Advantageously, the zones can be used by the controller 102 (e.g., in the profile) in order to track the vehicle 30 through the drive through system 10.
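A non-limiting sketch of the four-point zone definition and of the containment test that allows a detected vehicle to be associated with a zone follows; the zone labels and pixel coordinates in this Python listing are illustrative assumptions.

    def point_in_polygon(point, corners):
        # Ray-casting containment test for an arbitrary polygon.
        x, y = point
        inside = False
        n = len(corners)
        for i in range(n):
            x1, y1 = corners[i]
            x2, y2 = corners[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Zones keyed by (camera identifier, zone label); the same label
    # (e.g., "A") covers the same physical location in every camera
    # whose image data includes that location.
    zones = {
        ("cam1", "A"): [(100, 400), (220, 390), (230, 470), (105, 480)],
        ("cam2", "A"): [(610, 210), (700, 215), (705, 280), (600, 275)],
    }

    centroid = (150, 440)  # centroid of a vehicle detected in cam1's image data
    in_zone_a = point_in_polygon(centroid, zones[("cam1", "A")])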
Referring still to
The setup device 302 may also receive user inputs from the user interface 304 defining locations of interest along the paths, curves, or routes. The locations of interest may include an arrival location, a drive through entry location, an order lane entry location, an order location, an order lane exit location, a payment location, an order pickup location, an exit location, etc. In some embodiments, the locations are stored by the setup device 302 and used in order to determine the setup profile. The locations may be point locations or may be tied to corresponding zones. For example, the locations may be labels associated with corresponding zones (e.g., zone 501 is the order pickup zone).
The setup device 302 is configured to provide the setup profile to the controller 102 for use in operating the drive through system 10, according to some embodiments. In some embodiments, the setup profile includes (i) the paths, routes, or curves for the different lanes of traffic (e.g., entry lanes, parking lot lanes or routes, order lanes, pickup lanes, emergency lanes, exit lanes, etc.), (ii) the zones along the lanes of traffic (e.g., a series of geometric shapes that follow the lanes of traffic) for the satellite image data, and the image data from each of the cameras 114, (iii) one or more locations of interest, and (iv) an image analysis technique or set of instructions. In some embodiments, the image analysis technique is provided in the form of a neural network, an artificial intelligence, a machine learning technique or set of instructions, etc. The image analysis technique may be performed by the controller 102 locally such that the controller 102 can use the image data provided by the cameras 114 in order to detect a type and color of vehicle. In some embodiments, performing the image analysis technique locally at the controller 102 of the drive through system 10 facilitates ensuring privacy of the image data.
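By way of a non-limiting illustration, the setup profile may be serialized along the lines of the following Python listing; every field name shown is an assumption for purposes of illustration rather than an actual file format of the vision system 200.

    # Sketch of a serialized setup profile; all field names are illustrative.
    setup_profile = {
        "paths": {
            "order_lane_1": [[0.0, 0.0], [10.0, 2.0], [20.0, 8.0]],   # fitted curve samples
            "pickup_lane":  [[20.0, 8.0], [25.0, 8.5], [30.0, 9.0]],
        },
        "zones": {
            "cam1": {"A": [[100, 400], [220, 390], [230, 470], [105, 480]]},
            "cam2": {"A": [[610, 210], [700, 215], [705, 280], [600, 275]]},
        },
        "locations_of_interest": {"order_1": "A", "pickup_window": "F"},  # labels tied to zones
        "masks": {"cam1": "cam1_mask.png"},        # per-camera areas excluded from analysis
        "image_analysis": "vehicle_model_v1.onnx"  # locally executed detection/classification model
    }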
Referring to
The process 400 includes obtaining map data of a drive through location (step 402), according to some embodiments. In some embodiments, the map data of the drive through location is satellite image data. In some embodiments, the map data of the drive through location is obtained by the setup device 302 from the cloud computing system 104 or the satellite 306. The map data may otherwise be obtained from a database, a maps service, a satellite image database, a drone that is flown around the drive through location, etc.
The process 400 includes defining a route of the drive through location (step 404), according to some embodiments. In some embodiments, step 404 includes operating a user interface, display screen, computer screen, tablet, touch screen, etc., in order to display the map data obtained in step 402. Step 404 may also include allowing the user, via a user interface, to provide one or more points that define the route. The points may be geographic (e.g., GPS) coordinates and may define different locations along a lane of traffic. In some embodiments, step 404 is performed by the setup device 302. The setup device 302 may implement a curve-fitting technique (e.g., Catmull-Rom) based on the points defined by the user in order to define the route. In some embodiments, step 404 includes defining multiple routes by allowing the user to input, via a user device, multiple points for each of multiple routes. For example, the different routes may correspond to different lanes of traffic (e.g., multiple order lanes, emergency lanes, parking lot locations or lanes of traffic, pickup lanes, entry lanes, exit lanes, etc.).
The process 400 includes identifying locations for imaging devices along the route (step 406), according to some embodiments. In some embodiments, step 406 is optional. For example, if cameras are already installed at the drive through location, then step 406 may be unnecessary. In some embodiments, step 406 is performed automatically by the setup device 302 based on an identification of the routes, possible mounting locations, as well as camera models, fields of view, etc. In some embodiments, step 406 is performed by a manufacturer or installation service of a vision system for the drive through location. For example, the map data and the route(s) obtained in steps 402-404 may be provided to the manufacturer or installation service. The manufacturer or installation service may provide suggestions regarding the locations, orientations, and models of the imaging devices (e.g., cameras) in order to obtain image data of the entirety of the route from multiple locations.
The process 400 includes installing imaging devices at the identified locations along the route (step 408), according to some embodiments. In some embodiments, step 408 includes installing imaging devices (e.g., cameras) at the locations identified in step 406 about the drive through location. Step 408 may include installing posts or poles and imaging devices at the top of the posts or poles for locations where additional imaging devices are required. In some embodiments, step 408 is optional if the drive through location already has imaging devices installed on the premises. Step 408 may be performed by the manufacturer of the vision system for the drive through location, an installation service, etc.
The process 400 includes defining zones along the route in image data obtained from the imaging devices (step 410), according to some embodiments. In some embodiments, step 410 is performed by the setup device 302 based on inputs from a user (e.g., via the user interface 304). For example, step 410 may include defining multiple zones (e.g., geometric shapes, areas, etc.) along the routes determined in step 404 for each of multiple imaging devices. In some embodiments, step 410 is performed by first prompting the user to define multiple zones or locations along the routes for the map data of the drive through location. In some embodiments, the setup device 302 is configured to automatically map the zones defined by the user (or automatically by defining zones along the routes) in the map data to the image data of the imaging devices. In some embodiments, the setup device 302 is configured to use one or more of a location, orientation, height, field of view, etc., of the imaging devices in order to automatically identify preliminary boundaries (e.g., edges, corners, etc.) for the zones in the image data of the imaging devices. The zones along the route may be definable by the user for the image data of the imaging devices and/or may be adjustable if an initial definition of the zones is provided automatically. The setup device 302 may prompt the user to define the zones in a manner that is consistent across the image data obtained from the imaging devices. For example, if a zone A is defined at a first order location (e.g., an order placement location in front of the drive through unit 108) in image data obtained from a first imaging device, and the image data obtained from a second imaging device includes the first order location from a different perspective, the user may be prompted to define a zone at the first order location in the image data obtained from the second imaging device as zone A. In this way, the zones may be consistently defined in a uniform manner between the image data obtained from multiple imaging devices in order to account for different perspectives or views of a same location along the routes. Step 410 may be performed for any number of routes that are defined in step 404.
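One non-limiting way to produce the preliminary zone boundaries described above is a planar homography between ground reference points matched between the map data and a camera's image data, sketched below using OpenCV; the coordinates are illustrative assumptions.

    import numpy as np
    import cv2

    # Four ground reference points matched between the map data and one
    # camera's image define a homography that maps zone corners drawn on
    # the map into preliminary pixel-space corners, which the user may
    # then adjust. All coordinates are illustrative.
    map_pts = np.float32([[0, 0], [30, 0], [30, 15], [0, 15]])            # map coordinates
    img_pts = np.float32([[80, 420], [560, 400], [600, 120], [60, 100]])  # pixel coordinates

    H = cv2.getPerspectiveTransform(map_pts, img_pts)

    zone_on_map = np.float32([[[5, 2]], [[9, 2]], [[9, 5]], [[5, 5]]])    # zone corners (map)
    zone_in_image = cv2.perspectiveTransform(zone_on_map, H)              # preliminary corners (pixels)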
The process 400 includes defining locations of interest along the route (step 412), according to some embodiments. In some embodiments, step 412 is performed by the setup device 302 by prompting the user to input, via the user interface 304, one or more locations of interest. The locations of interest may include an entry location, an exit location, a location with a best view of an incoming vehicle for use in image analysis, an entry to a first order lane, an entry to a second order lane, an entry to a third order lane, an entry to a fourth order lane, a first, second, third, or fourth order location, an exit to any order lane, a first window location, a second window location, a pickup order location, etc. In some embodiments, step 412 is performed by labelling one or more of the zones defined in step 410 along the route as any of the locations. In this way, identification of a vehicle in one of the zones may result in identification that the vehicle is at a location of interest (e.g., that a vehicle has pulled up to a first order location). The locations of interest may be used in order to integrate the vision system with a POS unit. The locations of interest may also be used in order to identify performance parameters of the drive through location (e.g., service time, total life cycle time of a vehicle in the drive through location, etc.).
The process 400 includes masking the image data obtained from the imaging devices (step 414), according to some embodiments. In some embodiments, step 414 is performed by the setup device 302 and the user interface 304. For example, the setup device 302 may operate the user interface 304 to provide image data of the imaging device and allow the user to define one or more zones or areas of the image data within which vehicles are unlikely to be detected. The user interface 304 may provide a pen tool, an eraser tool, a paintbrush tool, etc., for the user to define the zones or areas of the image data as the masks. The masks defined in step 414 generally define regions of the image data obtained from each of the imaging devices that should not be used or considered by the vision system when tracking vehicles through the drive through location. For example, if the image data from the imaging device includes surrounding areas (e.g., sidewalks, background image data, surroundings, etc.) where a vehicle is unlikely to be detected, the mask may remove the corresponding regions of the image data from consideration in image analysis and tracking. In some embodiments, the masks indicate areas of the image data that should be considered by the image analysis and tracking techniques (e.g., an area of interest). The masks can facilitate speed and accuracy of the image analysis and tracking techniques performed on the image data obtained from the imaging device. In some embodiments, the masks define areas or regions that have extraneous image data.
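A non-limiting sketch of applying such a mask so that image analysis only considers the lane area follows; the polygon coordinates and file name in this listing are illustrative assumptions.

    import numpy as np
    import cv2

    # Build a binary mask from a user-drawn polygon and apply it so that
    # detection and tracking only consider the lane area.
    frame = cv2.imread("cam1_frame.png")               # one frame of camera image data
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)   # start with everything masked out

    lane_area = np.array([[50, 470], [620, 455], [630, 250], [40, 260]])
    cv2.fillPoly(mask, [lane_area], 255)               # keep only the lane area

    masked = cv2.bitwise_and(frame, frame, mask=mask)  # pixels outside the area go black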
The process 400 includes determining a profile for the drive through location (step 416), according to some embodiments. In some embodiments, the profile is a configuration file or dataset for a controller (e.g., the controller 102) of the drive through location. The profile may include the map data obtained in step 402, the route(s) defined in step 402, the locations and orientations of each of the imaging devices, the zones defined in step 410, the locations of interest defined in step 412, the masks defined in step 414, and an image analysis technique. The profile may be location-specific for the particular drive through location and may account for the unique camera placement, lanes of traffic shapes, distances, curvatures, the parking lot size, overall store arrangement, order locations, pickup locations, etc., of the specific drive through location for which process 400 is performed. In some embodiments, the profile is a locally-executable file for the controller of the drive through location. The profile may configure the controller of the drive through location to perform one or more image analysis and vehicle tracking techniques as desired. In some embodiments, the profile utilizes one or more of a neural network, a machine learning technique, an artificial intelligence tool, etc., that is provided by the manufacturer of the vision system described herein that can be implemented locally on the controller of the drive through location. If updates to the profile are desired, a drive through location administrator may re-initiate any of steps 402-416 in order to account for updates to lane configuration, construction at the store, closing of order lanes, etc. Step 416 may be performed by the setup device 302 by providing the profile to the controller 102.
The process 400 includes operating a POS unit of the drive through location using a vision system that implements the profile (step 418), according to some embodiments. In some embodiments, step 418 is implemented by the controller 102 of the drive through location. For example, the controller 102 may implement the profile and identify incoming customers' vehicles and track the customers' vehicles through the various lanes of traffic (e.g., along the routes). The controller 102 may track the vehicles as the vehicles travel along the routes and pass from one zone to the next along the routes. In response to a vehicle approaching an order location, the controller 102 may operate a POS unit (e.g., associated with an order taker) to notify the order taker that a customer has arrived at the order location. The POS unit may be operated to prompt the order taker to initiate an order for the customer at the order location, and may include providing information regarding the customer at the order location (e.g., color of vehicle, type of vehicle, etc.). Step 418 may also include operating a display screen or an alert device (e.g., an audiovisual system, an alarm, an LED display, a speaker configured to provide an audible notification) to notify drive through personnel fulfilling orders for customers that pull up to an order pickup location. For example, step 418 may include operating the display screen to notify personnel regarding the POS data, as well as the type of the vehicle and the color of the vehicle as the vehicle is identified at the order pickup location. Advantageously, notifying the personnel who fulfill orders to the customers regarding the POS data, as well as identifying characteristics of the vehicle associated with the POS data improves order fulfillment accuracy and reduces a likelihood that the wrong order is provided to the customers (e.g., reducing order mix-ups).
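By way of a non-limiting illustration, dispatching a notification when a tracked vehicle reaches a tagged zone may be sketched as follows; the zone labels, device names, and callback wiring are illustrative assumptions.

    # Map locations of interest to the devices that should be notified.
    ZONE_ACTIONS = {
        "order_1": "pos_unit_1",          # order placement location -> order taker's POS unit
        "pickup_window": "pickup_alert",  # order pickup location -> fulfillment display
    }

    def on_zone_entered(vehicle, zone_label, notify):
        # Dispatch a notification when a tracked vehicle reaches a tagged zone.
        device = ZONE_ACTIONS.get(zone_label)
        if device is not None:
            notify(device, {
                "vehicle_color": vehicle["color"],  # e.g., "red"
                "vehicle_type": vehicle["type"],    # e.g., "SUV"
                "zone": zone_label,
            })

    # Example wiring with a print-based notifier standing in for the POS GUI.
    on_zone_entered({"color": "red", "type": "SUV"}, "order_1",
                    lambda device, message: print(device, message))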
Referring to
Referring to
The GUI 600 also includes a toolbar 622 including a camera selector 624 (e.g., buttons, a drop-down menu, selectable options, etc.) and a lane selector 626, according to some embodiments. In some embodiments, the lane selector 626 allows the user to select between multiple lanes to edit a series of zones 604 or zones 606 corresponding to the selected lane. In the example shown in
Referring still to
In some embodiments, the GUI 600 includes a first side pane 628 (e.g., an information display area) corresponding to the mask 608. The first side pane 628 displays information corresponding to points or pixels of the boundary of the mask 634. The GUI 600 includes a second side pane 630 (e.g., an information display area) corresponding to the first lane and the zones 604 of the first lane. In some embodiments, the second side pane 630 illustrates which pixels or points of the corresponding camera 114 define the zones 604 (of the first lane). In some embodiments, the GUI 600 includes a third side pane 632 that illustrates which pixels or points of the corresponding camera 114 define the zones 606 (of the second lane). In some embodiments, the information displayed in the first side pane 628, the second side pane 630, and the third side pane 632 are unique for each of the cameras 114. The information displayed in the first side pane 628, the second side pane 630, and the third side pane 632 may be a part of the setup profile that is provided from the setup device 302 to the controller 102 for each camera 114 at the restaurant 14.
Referring to
Referring to
The exit lane 802d may be an escape or emergency lane to provide a point of egress from the pickup lane 802c. The exit lane 802d may be defined from a point of the pickup lane 802c to a road or exit location surrounding the store 14. In some embodiments, the return lane 802f includes the return path 804f that wraps around from the pickup lane 802c to one or more parking spots 808 at a front of the store 14. The parking spots 808 may each include corresponding zones for parking to await a carry out order. In some embodiments, the parking spots 808 are positioned at a pickup location 806 at a front of the store 14.
It should be understood that the vision system 200 that is set up for the store 14 as shown in
Online Implementation
Referring to
The POS units 902 may be positioned at any windows (e.g., windows 28) of a restaurant or store. For example, both pickup windows at which orders are fulfilled and payment windows at which customers pay for their items may include corresponding POS units 902, in addition to other display screens. In another example, pickup windows (e.g., an order fulfillment window) may include both a display screen (e.g., a kitchen display screen, a tablet, a fulfillment display screen, etc.) and a POS unit 902. The display screens and the POS units 902 may be operated by the controller 102 based on the image data from the cameras 114 in order to provide characteristics of the vehicles 30 (e.g., type and color) according to one or more modalities (e.g., textual information, visual information such as icons that represent the characteristics of the vehicles 30). The display screens and the POS units 902 may be operated based on tracking the vehicles 30 in a virtual space based on the zones that are defined when setting up the profile for the image data of each of the cameras 114. As described in greater detail above, the zones may be defined differently for the image data of each camera 114 in order to account for different perspectives, orientations, positions, fields of view, etc., of the cameras 114, which results in the image data of the zones spanning different areas of the image data (e.g., different pixel regions). The definition of the zones accounts for the different locations, sizes, etc., of the image data of the zones in the image data of the cameras 114 such that the controller 102 can consistently track vehicles 30 through the drive through system 10 from camera to camera, facilitating seamless tracking of the vehicles 30 despite the different perspectives and positioning of the cameras 114.
The controller 102 may obtain the image data from each of the cameras 114 in real-time. In some embodiments, when a vehicle 30 first arrives at the drive through system 10, the controller 102 may use an initial image of the vehicle 30 to determine (1) a color of the vehicle 30, and (2) a type of the vehicle 30. In some embodiments, the controller 102 is configured to implement an image analysis technique using the initial image of the vehicle 30 to determine (1) the color of the vehicle 30, and (2) the type of the vehicle 30. In some embodiments, the controller 102 includes different predetermined colors such as red, blue, orange, black, white, green, etc. In some embodiments, the controller 102 is configured to determine, using the image analysis technique and the initial image of the vehicle 30, to which of the predetermined colors the color of the vehicle 30 corresponds. In some embodiments, the controller 102 includes different predetermined types of vehicles such as car, truck, SUV, Jeep, etc. In some embodiments, the controller 102 is configured to determine, using the image analysis technique and the initial image of the vehicle 30, to which of the predetermined types of vehicles the type of the vehicle 30 corresponds.
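As a non-limiting sketch of mapping an initial image of a vehicle to one of the predetermined colors, a dominant-hue heuristic is shown below; a deployed system would instead rely on the trained image analysis technique, and the thresholds here are rough illustrative assumptions.

    import numpy as np
    import cv2

    # Rough hue ranges (OpenCV hue spans 0-179); red wraps around.
    HUE_NAMES = [(0, 10, "red"), (11, 25, "orange"), (26, 35, "yellow"),
                 (36, 85, "green"), (86, 130, "blue"), (131, 179, "red")]

    def classify_color(bgr_crop):
        # Classify a cropped vehicle image into a predetermined color.
        hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        if v.mean() < 50:
            return "black"                                 # very dark crop
        if s.mean() < 40:
            return "white" if v.mean() > 170 else "gray"   # low saturation
        hist = cv2.calcHist([h], [0], None, [180], [0, 180])
        dominant = int(np.argmax(hist))                    # most common hue
        for lo, hi, name in HUE_NAMES:
            if lo <= dominant <= hi:
                return name
        return "unknown"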
In some embodiments, the controller 102 is configured to use the paths and zones along the paths in the setup file to track a location of the vehicle 30 throughout a life cycle of the vehicle 30. In some embodiments, the controller 102 is configured to track which of the zones the vehicle 30 travels along or at which the vehicle 30 is currently located. For example, the controller 102 may identify that the vehicle 30 is traveling along the first order lane path 804a by detecting the vehicle 30 at zones along the first order lane path 804a. In some embodiments, the controller 102 is configured to use the image analysis techniques and one or more locations of interest (e.g., tagged or identified zones) to determine if the vehicle 30 has arrived at a specific location. The controller 102 may use the locations of interest (e.g., the tagged or identified zones) and the detection of the vehicles 30 at the locations of interest to operate corresponding POS units 902, or the alert devices 904. For example, in response to detecting that the vehicle 30 has arrived at a first order location, the controller 102 may operate corresponding POS units 902 to notify staff (e.g., order taking staff) that the vehicle 30 is awaiting service at a particular drive through unit 108. The controller 102 may operate the POS units 902 to notify the order taking staff regarding the color and the type of the vehicle 30 that is at the particular drive through unit 108. In this way, the vision system 200 (e.g., the image data of the cameras 114, the image analysis and tracking techniques performed by the controller 102, etc.) can be integrated with the POS units 902 of the store 14. The POS units 902 may be display screens, the order taking stations 110, etc.
In some embodiments, the POS units 902 may be operated by the order taking staff. POS data resulting from the POS units 902 may be provided to the controller 102. The controller 102 may operate alert devices 904 responsive to at least one of the POS data obtained from the POS units 902, or based on results of the image analysis techniques performed using the image data.
In some embodiments, the controller 102 is configured to operate a pickup alert device 908 or a carry-out alert device 910. In some embodiments, the pickup alert device 908 is a display screen positioned proximate a window at which the vehicle 30 pulls up or arrives in order to receive the customer's order. The controller 102 may track the vehicle 30 to the window, and in response to the vehicle 30 arriving at the window, operate the pickup alert device 908 to display the color of the vehicle 30 and the type of the vehicle 30 that is at the window. In some embodiments, the controller 102 is also configured to operate the pickup alert device 908 to display POS data such as an order number, items in the order, a name of the customer in the vehicle 30, etc. In this way, the POS data of the customer in the vehicle 30 may be tied to the detected color and type of the vehicle 30, which may be displayed to staff as the staff fulfill the order to the customer in the vehicle 30. Advantageously, displaying POS data (e.g., order information) in combination with characteristics of the vehicle 30 to receive the order (e.g., the type and color of the vehicle 30) facilitates accurate order fulfillment and reduces a likelihood of the incorrect order or items being handed to customers.
Referring still to
Referring still to
The controller 102 may also be configured to count a number of customers (e.g., a number of vehicles 30) that are in line waiting to order. In some embodiments, the number of customers that are in line at the drive through system 10 awaiting their turn to place an order is referred to as a "stack size." In some embodiments, during busy times of day, the controller 102 may identify that a number of vehicles 30 have wrapped around the store 14, into a parking lot, onto the street, etc. The controller 102 may use the image data provided by the cameras 114 of surrounding areas of the store 14 (e.g., the parking lot, an adjacent road, an entrance to the parking lot, etc.), and determine the stack size of the drive through system 10. In some embodiments, the controller 102 is configured to record a time of day and corresponding stack size. The time of day and corresponding stack size may be used by the controller 102 in order to track busy times of day and initiate preparatory actions for preparing food and/or beverage items before the busy times of day.
In some embodiments, the controller 102 is configured to operate one or more kitchen alert devices 906 responsive to the stack size. In response to detecting a stack size above a threshold using the image data provided by the cameras 114, the controller 102 may initiate cooking operations by prompting kitchen staff to prepare food and/or beverage items. The controller 102 may initiate the cooking operations or preparatory actions (e.g., a kitchen action) by operating the one or more kitchen alert devices 906. In some embodiments, the controller 102 uses a predetermined set of instructions based on the stack size to determine which preparatory actions to initiate. For example, if the controller 102 includes historical data that 90% of customers typically order a specific type of food and/or beverage item that requires a significant amount of time to prepare, the controller 102 may prompt the kitchen staff, based on the stack size, to initiate preparation of a corresponding amount of the food and/or beverage item. In this way, the controller 102 may initiate kitchen actions using the image data provided by the cameras 114. In some embodiments, the controller 102 is configured to operate the POS unit(s) 902 or a manager display screen 914 to display the stack size of the drive through system 10.
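A non-limiting sketch of the stack size computation and the threshold-based kitchen alert follows; the queue zone labels, threshold value, and alert and logging hooks are illustrative assumptions.

    # Zones that count toward the stack size; labels are illustrative.
    QUEUE_ZONES = {"lane_1_queue", "lane_2_queue", "parking_overflow", "street_overflow"}
    STACK_THRESHOLD = 6

    def stack_size(tracked_vehicles):
        # Count vehicles whose current zone is one of the queue zones.
        return sum(1 for vehicle in tracked_vehicles if vehicle["zone"] in QUEUE_ZONES)

    def check_stack(tracked_vehicles, alert, log):
        size = stack_size(tracked_vehicles)
        log(size)  # record time of day and stack size for historical analysis
        if size >= STACK_THRESHOLD:
            alert("kitchen_alert_device", f"Stack size {size}: begin preparatory batch")
        return size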
Referring to
Memory 1006 (e.g., memory, memory unit, storage device, etc.) can include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present application. Memory 1006 can be or include volatile memory or non-volatile memory. Memory 1006 can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present application. According to some embodiments, memory 1006 is communicably connected to processor 1004 via processing circuitry 1002 and includes computer code for executing (e.g., by processing circuitry 1002 and/or processor 1004) one or more processes described herein.
In some embodiments, controller 102 is implemented within a single computer (e.g., one server, one housing, etc.). In various other embodiments, controller 102 can be distributed across multiple servers or computers (e.g., that can exist in distributed locations). For example, controller 102 can be made up of multiple controllers or microcontrollers that individually or collectively are configured to carry out operations as described herein.
The controller 102 is configured to obtain the setup profile from the cloud computing system 104, and store the setup profile in the memory 1006, shown as setup profile 1020. The controller 102 may use the setup profile in order to implement the techniques described herein, or to configure the controller 102 to implement the techniques as described herein. In some embodiments, the controller 102 is also configured to provide POS data and performance metrics to the cloud computing system 104. The controller 102 may also receive POS data or performance metrics of other drive through systems 10 from the cloud computing system 104.
The controller 102 includes a database 1008, an order taking manager 1010, a communications manager 1012, and a vision system manager 1030, according to some embodiments. The database 1008 may be configured to store POS data, menu data, performance data, historical metrics of busy times, inventory or stock, personnel identification, etc. In some embodiments, the order taking manager 1010 is configured to retrieve menu data from the database 1008 for use in constructing a GUI of the POS units 902, or to update the GUI of the POS units 902 (e.g., when an item is out of stock, when a price of the item has changed, etc.). In some embodiments, the order taking manager 1010 is configured to implement any of the techniques as described in U.S. application Ser. No. 17/325,871, now U.S. Pat. No. 11,244,681, filed May 20, 2021, the entire disclosure of which is incorporated by reference herein.
Referring still to
The vision system manager 1030 is configured to integrate the vision system 200 (e.g., the cameras 114) with the POS units 902. The vision system manager 1030 may be configured to implement an image analysis technique 1014 using the image data obtained from the cameras 114. In some embodiments, the image analysis technique 1014 is implemented using a neural network, a machine learning technique, a deep learning technique, an artificial intelligence, etc. The image analysis 1014 is configured to detect the presence of a vehicle in the image data provided by the cameras 114 as well as a color and type of the vehicle.
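As a non-limiting sketch of the structure such an image analysis could take, the `detector` and `attribute_classifier` callables below stand in for whatever trained models are actually used; their interfaces and the confidence threshold are assumptions:

```python
# Illustrative pipeline: detect vehicles, then classify color and type from
# the cropped region. Model interfaces here are assumed, not prescribed.
import numpy as np

VEHICLE_CLASSES = {"car", "truck", "suv"}

def analyze_frame(frame: np.ndarray, detector, attribute_classifier) -> list[dict]:
    """Return color/type detections for vehicles found in one camera frame."""
    detections = []
    for box, label, score in detector(frame):   # assumed (box, label, score)
        if label in VEHICLE_CLASSES and score > 0.5:
            x0, y0, x1, y1 = (int(v) for v in box)
            crop = frame[y0:y1, x0:x1]          # pixel crop of the vehicle
            color, vtype = attribute_classifier(crop)  # e.g., ("red", "truck")
            detections.append({"box": (x0, y0, x1, y1),
                               "color": color, "type": vtype})
    return detections
```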
The vision system manager 1030 also includes a vehicle tracker 1028. The vehicle tracker 1028 is configured to use the setup profile 1020 (e.g., the zones of the setup profile 1020) to track the vehicles detected by the image analysis 1014 through the drive through system 10. In particular, the vehicle tracker 1028 may use the detection of the color and type of the vehicle 30 provided by the image analysis 1014 in combination with a map including the zones. The vehicle tracker 1028 may output notifications of the vehicle 30 traveling into zones of interest. The vehicle tracker 1028 is configured to track position, speed, change in position, etc., of the vehicles 30 in the drive through system 10.
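One minimal way to sketch the zone-based tracking, assuming the zones from the setup profile 1020 can be modeled as named rectangles in each camera's image coordinates (the names, types, and structure are illustrative assumptions only):

```python
# Zones from the setup profile 1020 modeled as named rectangles in image
# coordinates; per-camera zone geometry would differ in practice.
Zone = tuple[str, int, int, int, int]  # (name, x0, y0, x1, y1)

def zone_of(cx: float, cy: float, zones: list[Zone]) -> str | None:
    """Return the name of the zone containing point (cx, cy), if any."""
    for name, x0, y0, x1, y1 in zones:
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            return name
    return None

class VehicleTracker:
    """Tracks each vehicle's current zone and reports zone transitions."""

    def __init__(self, zones: list[Zone]):
        self.zones = zones
        self.last_zone: dict[int, str | None] = {}  # track_id -> zone name

    def update(self, track_id: int, cx: float, cy: float) -> str | None:
        """Update one vehicle's position; return a notification on zone entry."""
        zone = zone_of(cx, cy, self.zones)
        entered = zone is not None and zone != self.last_zone.get(track_id)
        self.last_zone[track_id] = zone
        if entered:
            return f"vehicle {track_id} entered zone '{zone}'"
        return None
```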
The GUI manager 1016 is configured to use the tracking of the vehicles 30 provided by the vehicle tracker 1028 and the color and type of the vehicles 30 provided by the image analysis 1014 to generate or adjust the GUI that is provided to the POS units 902. In particular, when a vehicle arrives at one of the drive through units 108 as identified by the vehicle tracker 1028, the GUI manager 1016 may update a corresponding icon of the GUI that is displayed on the POS units 902 to visually indicate the color and type of the vehicle 30. The GUI manager 1016 may also adjust the GUI of the POS units 902 such that the POS units 902 display textual information of the color and type of the vehicle 30 that is present at the drive through units 108. The order takers (e.g., users of the POS units 902) may select corresponding drive through units 108 via the POS units 902 in order to switch their wearable communications device 112 to the drive through unit 108 at which the vehicle 30 is detected.
In some embodiments, the order takers may initiate an order taking process for the customer of the vehicle 30 (e.g., by adding items to the order). Once the order taking process is complete, the POS unit 902 may generate POS data (e.g., order information, order number, etc.), which is provided to the database 1008 and the vehicle tracker 1028. The vehicle tracker 1028 may record an identifier of the POS data and tie the POS data to the vehicle 30. In this way, as the image analysis 1014 continues to use newly obtained image data to identify the vehicle 30 in the image data, and the vehicle tracker 1028 tracks the vehicle 30 through the drive through system 10, the POS data (e.g., order data, customer name, items in order, total cost, etc.) may be linked with the vehicle 30. Responsive to the vehicle tracker 1028 indicating that the vehicle 30 has arrived at a pay window or a pickup window (e.g., the first window 28a, the second window 28b, etc.), the GUI manager 1016 may operate the POS units 902 to indicate that the vehicle 30 (with the linked POS data) has arrived at the pay window or the pickup window. Advantageously, the POS units 902 are integrated with the vision system 200 such that the POS units 902 can be updated responsive to tracking of vehicles 30 through the drive through system 10. Similarly, the vehicle tracker 1028 may track vehicles through the drive through system 10 while linking the POS data of the vehicle 30 with the tracking. The GUIs that are generated by the GUI manager 1016 based on results of the vision system manager 1030 using the image data may be any of the GUIs described in greater detail below with reference to
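A minimal sketch of tying POS data to a tracked vehicle, assuming each tracked vehicle carries a stable track identifier; the zone names, field names, and `gui` interface are assumptions for this sketch:

```python
# Hypothetical association of POS data with a tracked vehicle; the zone
# names, field names, and `gui` interface are assumptions, not real APIs.
class OrderLinker:
    def __init__(self):
        self.orders: dict[int, dict] = {}  # track_id -> linked POS data

    def link(self, track_id: int, pos_data: dict) -> None:
        """Called once the order taking process for a vehicle is complete."""
        self.orders[track_id] = pos_data

    def on_zone_entry(self, track_id: int, zone: str, gui) -> None:
        """Surface the linked order when the vehicle reaches a window."""
        if zone in ("pay_window", "pickup_window") and track_id in self.orders:
            gui.show_arrival(zone, self.orders[track_id])  # assumed GUI call
```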
In some embodiments, the GUI manager 1016 is also configured to generate a manager GUI and operate the manager display 914 to provide the manager GUI. The manager GUI may be any of the GUIs described in greater detail with reference to
In some embodiments, the POS data includes order data that may be selected and viewed by selecting the icons of the manager GUI. In some embodiments, the manager GUI is generated by the GUI manager 1016 using results of the vehicle tracker 1028, the image analysis 1014, the POS data, and the setup profile 1020. The manager GUI may include a visualization (e.g., textual, an icon, etc.) of a stack size of vehicles 30 in the drive through system 10. The stack size indicates a number of vehicles 30 that are beyond a certain point (e.g., upstream of the drive through units 108, upstream of an entrance of the drive through system 10, etc.) or between other vehicles 30 in the drive through system 10. The manager GUI may also display various performance metrics or characteristics of the drive through system 10. The manager GUI may also display vehicles 30 at a carry-out (take-out, pick-up) location (e.g., that have been directed by staff to pull to a parking spot) and vehicles awaiting a mobile pickup order.
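For illustration, if the tracker reports a current zone for each vehicle, the stack size could be computed by counting vehicles in the queueing zones; the zone names below are assumptions:

```python
def stack_size(current_zones: dict[int, str], queue_zones: set[str]) -> int:
    """Count vehicles currently in queueing zones (e.g., upstream of the menus)."""
    return sum(1 for zone in current_zones.values() if zone in queue_zones)

# e.g., stack_size({1: "queue_entry", 2: "menu_1", 3: "queue_entry"},
#                  {"queue_entry"}) -> 2
```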
The vision system manager 1030 includes a performance manager 1026 that is configured to use results of the vehicle tracker 1028 to determine one or more performance characteristics of the drive through system 10. In some embodiments, the performance manager 1026 is configured to identify, based on the results of the vehicle tracker 1028, when a vehicle arrives at an entry point of the drive through system 10. The performance manager 1026 may record a time at which each vehicle arrives at the entry point. The time at which the vehicle 30 arrives at the entry point may be linked to the particular vehicle 30. The performance manager 1026 may also record a time at which the vehicles 30 each arrive at one of the drive through units 108. The performance manager 1026 may also record a time at which the customers in the vehicles 30 begin their order (e.g., once the order taker switches to communicating via the drive through units 108). The performance manager 1026 can also record a time at which the customers in the vehicles 30 complete their order. The performance manager 1026 may also record a time at which the customers in the vehicles 30 arrive at a first window (e.g., a payment window) and a second window (e.g., a pickup window). The performance manager 1026 may also record a time at which the customers in the vehicles 30 have their orders fulfilled. In some embodiments, the performance manager 1026 is configured to use any of the recorded times to estimate various quantities of elapsed time. For example, the performance manager 1026 may estimate a total amount of time that it takes each vehicle 30 to enter the drive through system 10, place their order, pick up their order, and exit the drive through system 10. The performance manager 1026 may determine an average amount of time it takes a vehicle to arrive, order, receive their order, and exit the drive through system 10. In some embodiments, the performance manager 1026 is configured to track any of: a number of vehicles that are commanded to pull over to a pickup location, a number of vehicles 30 that leave the drive through system 10 without ordering (e.g., due to the line being too long), an average amount of time that a customer waits at the drive through units 108 to begin placing an order, an average amount of time it takes a customer to place their order once initiated, a number of vehicles 30 in the drive through system 10, a number of customers served so far for the day, etc.
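A non-limiting sketch of such per-vehicle event timing follows; the event names and data layout are illustrative assumptions:

```python
# Sketch of per-vehicle event timing; event names are illustrative only.
import time
from collections import defaultdict

class PerformanceLog:
    EVENTS = ("entry", "at_menu", "order_start", "order_done",
              "pay_window", "pickup_window", "fulfilled")

    def __init__(self):
        self.times: dict[int, dict[str, float]] = defaultdict(dict)

    def record(self, track_id: int, event: str) -> None:
        """Record the wall-clock time at which a vehicle reaches an event."""
        self.times[track_id][event] = time.time()

    def total_time(self, track_id: int) -> float | None:
        """Elapsed seconds from system entry to order fulfillment, if known."""
        t = self.times[track_id]
        if "entry" in t and "fulfilled" in t:
            return t["fulfilled"] - t["entry"]
        return None

    def average_total_time(self) -> float | None:
        """Average end-to-end time across vehicles with complete records."""
        totals = [self.total_time(tid) for tid in self.times]
        totals = [x for x in totals if x is not None]
        return sum(totals) / len(totals) if totals else None
```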
The performance manager 1026 may be configured to provide any of the performance characteristics or metrics described herein to the cloud computing system 104. The performance manager 1026 may receive performance metrics or rankings of the store 14 from the cloud computing system 104. For example, the ranking may indicate which place the store 14 is in relative to other stores in the area or in a chain. The ranking of the store 14 may be determined based on any of the performance metrics or characteristics described herein (e.g., average time for a customer to enter the drive through system 10, place their order, receive their order, and leave the drive through system 10).
In some embodiments, the performance manager 1026 is configured to provide any of the performance characteristics or metrics of the drive through system 10 to the GUI manager 1016 for display on the POS units 902 or the manager display 914. In some embodiments, the performance manager 1026 is configured to store any of the performance metrics or characteristics of the drive through system 10 in the database 1008. The performance manager 1026 may also provide ranking of the drive through system 10 to the GUI manager 1016 for display on the POS units 902 or the manager display 914.
The vision system manager 1030 may include a prediction engine 1024, an action manager 1022, and an alert manager 1018, according to some embodiments. In some embodiments, the prediction engine 1024 is configured to use historical data (e.g., seasonal historical data) of demand at the drive through system 10 to predict actions that should be taken by kitchen staff or personnel of the drive through system 10. For example, the prediction engine 1024 may predict busy times, and provide the results of the predicted busy times of the store 14 to the action manager 1022. The action manager 1022 is configured to use the results of the prediction engine 1024 in order to identify actions that should be taken to prepare for busy times of the store 14, and prompt the alert manager 1018. The alert manager 1018 is configured to provide alerts to the alert devices 904 to notify staff of the store 14 regarding the identified actions. Identified actions may include preparation of certain food and/or beverage items, an amount of food and/or beverage items to be prepared, sending out line busters with handheld POS units, opening new lanes, etc.
The action manager 1022 may use outputs from the prediction engine 1024 and may also use outputs from the performance manager 1026. In some embodiments, outputs of the performance manager 1026 include the stack size, and indicate real-time or current data of the drive through system 10 as opposed to predicted data. In some embodiments, the action manager 1022 may prefer the results of the performance manager 1026, which indicate real-time performance metrics, over the outputs of the prediction engine 1024. For example, if the prediction engine 1024 predicts, based on historical data, that there is a likelihood that ten customers are present in the drive through system 10 at a current time, but the image data indicates that twenty-five customers are present in the drive through system 10 (e.g., as indicated by the outputs of the performance manager 1026), the action manager 1022 may use the results of the image data instead of the outputs of the prediction engine 1024. In some embodiments, the action manager 1022 is configured to use a machine learning technique in order to determine one or more cooking or kitchen operations that should be performed responsive to the number of customers in the drive through system 10. For example, the action manager 1022 may use a known percentage of customers that will order a specific type of food and/or beverage item, and initiate preparation of a corresponding amount of the specific type of food and/or beverage item based on the number of customers in the drive through system 10.
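The real-time-over-prediction preference and the proportional preparation amounts described above could be sketched as follows; the function names and the rounding choice are illustrative assumptions:

```python
# Sketch of preferring live counts over predictions; values illustrative.
import math

def effective_customer_count(predicted: int, observed: int | None) -> int:
    """Use the live, image-derived count when available; else the prediction."""
    return observed if observed is not None else predicted

def prep_quantity(customer_count: int, order_fraction: float) -> int:
    """E.g., 25 observed customers * 0.9 ordering an item -> 23 units."""
    return math.ceil(customer_count * order_fraction)
```

In the example above, twenty-five observed customers would override a prediction of ten, and a 90% order rate would yield a preparation prompt for twenty-three units.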
The action manager 1022 may also determine if it is appropriate to open an additional lane based on the current number of customers. In some embodiments, the action manager 1022 is configured to observe the average amount of time for a customer to enter the drive through system 10, place their order, receive their order, and exit. In response to the average amount of time exceeding a threshold, the action manager 1022 may determine that the additional lane should be opened, and may either initiate the opening of the additional lane automatically, or may prompt a store manager to open the additional lane. Similarly, the action manager 1022 may initiate or prompt opening the additional lane in response to the total number of customers in the drive through system 10 exceeding a threshold or in response to the stack size exceeding a threshold.
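For illustration, the lane-opening decision could combine the three conditions described above; the thresholds here are arbitrary assumptions and not values from the disclosure:

```python
# Illustrative thresholds for the lane-opening decision; not from the source.
AVG_TIME_LIMIT_S = 300.0  # average end-to-end seconds per vehicle
MAX_CUSTOMERS = 20        # total customers in the drive through
MAX_STACK = 10            # vehicles queued upstream

def should_open_additional_lane(avg_time_s: float, customers: int,
                                stack: int) -> bool:
    """Open (or prompt to open) a lane if any load measure exceeds its limit."""
    return (avg_time_s > AVG_TIME_LIMIT_S
            or customers > MAX_CUSTOMERS
            or stack > MAX_STACK)
```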
The alert manager 1018 is configured to receive actions or determinations of the action manager 1022, and operate the alert devices 904 to prompt staff to perform the action(s) determined by the action manager 1022. For example, the alert devices 904 may be kitchen alert devices such that the alert manager 1018 initiates preparation of food and/or beverage items in accordance with the determinations of the action manager 1022. Similarly, the alert devices 904 may be alert devices for order takers or staff personnel to open new lanes. In some embodiments, the GUI manager 1016 is configured to operate the manager GUI to prompt the manager to initiate opening the additional lane. The manager of the store may be presented with a request to open the additional lane, and may provide confirmation to open the additional lane via the manager display 914. Responsive to receiving the confirmation from the manager, the alert manager 1018 may operate the alert devices 904 to notify order taking personnel to open the additional lane.
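A minimal sketch of the confirm-then-alert flow; the `confirm` and `notify` methods are assumed interfaces on the manager display and alert devices, not real APIs:

```python
# Sketch of the confirm-then-alert flow; `confirm` and `notify` are assumed
# interfaces, used here only to illustrate the sequencing described above.
def request_lane_open(manager_display, alert_devices) -> None:
    """Prompt the manager; alert order takers only after confirmation."""
    if manager_display.confirm("Open additional lane?"):
        for device in alert_devices:
            device.notify("Open additional lane")
```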
Referring to
The process 1100 includes providing a drive through system including a vision system having a camera and a controller (step 1102), according to some embodiments. The vision system may be installed at a drive through of a store or restaurant. The vision system can include the camera configured to obtain image data of different lanes, pickup windows, order locations, exit lanes, emergency lanes, parking lots, etc. In some embodiments, the vision system includes multiple cameras that are positioned along different lanes of travel. The cameras may be positioned at locations, heights, and orientations as suggested or determined in step 406 of process 400.
The process 1100 includes detecting a vehicle using the image data (step 1104), according to some embodiments. In some embodiments, step 1104 includes obtaining the image data at the controller from the camera. Detecting the vehicle may include identifying a color (e.g., black, red, blue, purple, green, brown, white, etc.) and a type of the vehicle (e.g., compact car, sedan, SUV, truck, etc.). In some embodiments, step 1104 includes using an image analysis technique (e.g., a machine learning technique, an artificial intelligence, a neural network, etc.) in order to predict the color and type of the vehicle in the image data. Step 1104 can also be used to detect a location of the vehicle. In some embodiments, step 1104 includes detecting in which of multiple zones along various routes of the drive through system the detected vehicle is currently located. In some embodiments, step 1104 is performed simultaneously with steps 1106 and 1108 in real-time in order to track the vehicle's location. Step 1104 may also include detecting a speed or motion of the vehicle. Step 1104 may also include detecting if the vehicle is stopped or parked. In some embodiments, step 1104 is performed by the controller (e.g., controller 102).
The process 1100 includes operating a POS unit based on the detection of the vehicle (step 1106), according to some embodiments. In some embodiments, step 1106 includes operating the POS unit of an order taker in order to notify the order taker regarding the detection of the vehicle. For example, the POS unit may be operated to notify the order taker that the vehicle has arrived at an order location. In some embodiments, operating the POS unit includes prompting the order taker to initiate an order process. In some embodiments, operating the POS unit includes providing a notification of the color and type of the vehicle. In some embodiments, operating the POS unit includes providing a graphical representation of the vehicle that visually illustrates both the color and type of the vehicle. Step 1106 may include notifying an order taker to switch to an audio channel to take the customer's order. Step 1106 may be performed by the controller 102.
The process 1100 includes operating a display screen based on the detection of the vehicle (step 1108), according to some embodiments. The display screen may be a display screen positioned within a kitchen of the store or restaurant, a display screen for a store owner or manager, a display screen at a station (e.g., an order take out station), etc. Operating the display screen may include providing an alert. The alert may include a graphical representation of all vehicles in the drive through system. The alert may additionally or alternatively include recommendations to open a new lane (e.g., a store manager operation), perform cooking operations (e.g., a kitchen operation), carry out an order to a customer that is parked in a pickup zone (e.g., a take out operation), etc. The alert may additionally or alternatively include notifications regarding the color and type of the detected vehicle at a particular location. In some embodiments, the alert includes POS or order data provided simultaneously with at least one of a snapshot of the detected vehicle or the color and type of the detected vehicle. For example, the display screen may be positioned at an order pickup window facing staff responsible for handing orders (e.g., food and/or beverage items) to customers at the order pickup window. Step 1108 may include operating the display screen to provide both (1) order data (e.g., POS data, order number, order identification, first name of customer, etc.) and (2) vehicle data (e.g., a snapshot of the vehicle, the color and type of the vehicle, etc.). Providing the order data and the vehicle data simultaneously can facilitate improved order fulfillment accuracy and reduce a likelihood that one customer's order is handed to another customer.
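The paired display of order data and vehicle data could be sketched as follows; all field names are assumptions for this illustration:

```python
# Sketch of the paired pickup-window view; all field names are assumptions.
def pickup_window_view(order: dict, vehicle: dict) -> dict:
    """Combine order data and vehicle data to reduce hand-off errors."""
    return {
        "order_number": order.get("number"),
        "customer_name": order.get("first_name"),
        "items": order.get("items", []),
        "vehicle": f"{vehicle.get('color')} {vehicle.get('type')}",
        "snapshot": vehicle.get("snapshot"),  # cropped image of the vehicle
    }
```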
POS GUIs
Referring to
Referring to
The lane pane 1202 may be configured to display data corresponding to each of the drive through units 108 and the corresponding vehicles 30 that are present at each of the drive through units 108 (either currently placing an order, or waiting to place an order). In particular, the lane pane 1202 may display both the color and the type of the car that is present at the drive through units 108 as determined by the controller 102 using the image data. For example, as shown in
Referring to
The GUI 1200 may similarly be provided to fulfillment staff via the pickup alert device 908 (e.g., a display screen, a touch screen, etc.). However, the GUI 1200 as provided to the fulfillment staff may exclude the options for adding items to the customer's order. The GUI 1200 may present both the type and color of the vehicle 30 that is currently present at the corresponding window (e.g., the red truck at window 1) along with order information that is tagged to the characteristics of the vehicle 30 (e.g., the type and color).
Referring to
Referring again to
Tracking GUI
Referring to
Referring particularly to
The GUI 1600 may include menu icons 1604, corresponding to the drive through units 108. In particular, the GUI 1600 may include menu icon 1604a and menu icon 1604b corresponding to two different drive through units 108. The GUI 1600 illustrates which vehicle is currently located at the drive through units 108. The GUI 1600 may also include a pay window icon 1606 and a pickup window icon 1608. The GUI 1600 may visually indicate which of the vehicles 30 are at the pay window and the pickup window (e.g., via the pay window icon 1606 and the pickup window icon 1608). The GUI 1600 can also include a finish line 1612 that visually indicates which customers are leaving the drive through system 10 after a successful order. In some embodiments, the GUI 1600 also visually illustrates vehicles 30 that have been requested to pull to a pickup location, shown as pickup spots 1614. The GUI 1600 may place vehicle icons 1618 corresponding to vehicles 30 that are currently detected at the pickup spots 1614. In some embodiments, the GUI 1600 also visually illustrates vehicles 30 that are awaiting mobile order pickup, shown as mobile order pickup spots 1616.
Referring still to
Referring to
Referring to
Point of Sale and Kitchen GUIs
Referring to
Referring particularly to
Referring still to
Responsive to selecting the new order button 1816, the user (e.g., the order taker) is presented with a menu, shown in
Referring to
Referring to
Referring to
Responsive to the blue truck arriving at the second window, shown in
Referring to
As shown in
Referring to
It should be understood that any of the GUIs described herein with reference to
Configuration of Exemplary Embodiments
As utilized herein, the terms “approximately,” “about,” “substantially,” and similar terms are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. It should be understood by those of skill in the art who review this disclosure that these terms are intended to allow a description of certain features described and claimed without restricting the scope of these features to the precise numerical ranges provided. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the disclosure as recited in the appended claims.
It should be noted that the term “exemplary” and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, and/or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples).
The term “coupled,” as used herein, means the joining of two members directly or indirectly to one another. Such joining may be achieved with the two members coupled directly to each other, with the two members coupled to each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled to each other using an intervening member that is integrally formed as a single unitary body with one of the two members. Such members may be coupled mechanically and/or electrically.
The term “or,” as used herein, is used in its inclusive sense (and not in its exclusive sense) so that when used to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is understood to convey that an element may be either X, Y, Z; X and Y; X and Z; Y and Z; or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated.
References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below,” etc.) are merely used to describe the orientation of various elements in the FIGURES. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure.
The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device, etc.) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit and/or the processor) the one or more processes described herein. References to “a” or “the” processor should be understood to encompass a plurality of processors individually or collectively configured to carry out operations as described herein.
The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
It is important to note that the arrangement of the system as shown in the various exemplary embodiments is illustrative only. Other substitutions, modifications, changes, and omissions may be made in the design and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. All such modifications are intended to be included within the scope of the present disclosure.