
Why do we use arrays in C?

In C, array notation within expressions is always simply pointer arithmetic. Any use of an array identifier in an expression (except as the operand of sizeof or unary &) is immediately converted from "array of T" to "pointer to T", and the value becomes a pointer to the first element of the array. Array notation in an expression (e.g. a[i]) always expands into pointer arithmetic (e.g. *(a + i)). Array notation in declarations and definitions, however, is something else entirely. An array declarator describes an "array of T" type, whose values are sequences of elements of type T. A definition of an array object is all about using convenient, easily understood array notation to allocate the appropriate amount of storage for the desired array of objects, such that the array identifier refers to this storage without looking like a pointer. In effect, array notation in a declaration or definition is a macro that generates the appropriate size and address arithmetic and, in the case of a definition, the storage reservation itself: on the stack for auto arrays, or its equivalent in the linker for global arrays, all done at compile time (well, except for C99 variable-length arrays). The use of array notation in expressions and the use of array types are not so tightly connected, though it is tradition and idiom to use array notation in expressions to reference storage in objects declared and/or defined as arrays.


You can just as easily use array notation with a pointer type in order to make the pointer arithmetic cleaner and more meaningful. Indeed, in C an expression of the form E1[E2] is by definition identical to *((E1)+(E2)). The usual binary conversions are applied to the two operands, and the result is an lvalue. Since the indirection operator (*) must have a pointer as its operand, one of E1 and E2 must be a pointer and the other must be an integer, but it does not matter which, since the initial unary conversion for any "array of T" is to convert it to "pointer to T". Array notation in expressions is in effect a compiler (language-level) macro for generating pointer arithmetic expressions. So really, C already does work the way you suggest; you are just confusing the use of the notation in two very separate contexts.

Time to go back in time for a lesson. While we don't think about these things much in our fancy managed languages today, they are built on the same foundation, so let's look at how memory is managed in C. Before I dive in, a quick explanation of what the term "pointer" means. A pointer is simply a variable that "points" to a location in memory. It doesn't contain the actual value at that area of memory; it contains the memory address of it. Think of a block of memory as a mailbox: the pointer would be the address of that mailbox. In C, an array element is simply a pointer plus an offset, where the offset specifies how far into memory to look.
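As a small, hypothetical sketch of that equivalence (the names here are mine): because E1[E2] is just *((E1)+(E2)), array notation works on a plain pointer, and the operands can even be written in either order.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* buf has pointer type, not array type, yet indexing works unchanged,
           because buf[i] is defined as *(buf + i). */
        int *buf = malloc(4 * sizeof *buf);
        if (buf == NULL)
            return 1;

        for (int i = 0; i < 4; i++)
            buf[i] = i * 10;                        /* same as *(buf + i) = i * 10; */

        /* Either operand may be the pointer: 3[buf] is *((3) + (buf)). */
        printf("%d %d %d\n", buf[3], *(buf + 3), 3[buf]);   /* prints: 30 30 30 */

        free(buf);
        return 0;
    }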


This provides O(1) (constant) access time. All other data structures either build upon this or do not use adjacent memory for storage, resulting in poorer random-access lookup time (though there are other benefits to not using sequential memory). For example, let's say we have an array with 6 numbers (6, 4, 2, 3, 1, 5) in it. In memory it would look like this:

    =========================
    | 6 | 4 | 2 | 3 | 1 | 5 |
    =========================

In an array, we know that each element is next to the others in memory. A C array (called MyArray here) is simply a pointer to the first element:

    =========================
    | 6 | 4 | 2 | 3 | 1 | 5 |
    =========================
      ^
      MyArray

If we wanted to look up MyArray[4], internally it would be accessed like this:

      0   1   2   3   4   5
    =========================
    | 6 | 4 | 2 | 3 | 1 | 5 |
    =========================
                      ^
    MyArray + 4 ------+   (pointer + offset)

Because we can directly access any element in the array by adding the offset to the pointer, we can look up any element in the same amount of time, regardless of the size of the array. This means that getting MyArray[1000] would take the same amount of time as getting MyArray[5].

An alternative data structure is a linked list. This is a linear list of nodes, each holding a pointer to the next node:

    ========    ========    ========    ========    ========
    | Data |    | Data |    | Data |    | Data |    | Data |
    |  P1  |--->|  P2  |--->|  P3  |--->|  P4  |--->|  P5  |
    ========    ========    ========    ========    ========

P(X) stands for the pointer to the next node. Note that I made each "node" into its own block. This is because they are not guaranteed to be (and most likely won't be) adjacent in memory.
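To make the contrast concrete, here is a minimal, hypothetical C sketch (the struct and variable names are my own, not from the answer): reaching an array element takes one pointer-plus-offset step, while reaching a list element means chasing pointers node by node.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int data;
        struct node *next;   /* P(X): pointer to the next node */
    };

    int main(void)
    {
        int MyArray[6] = {6, 4, 2, 3, 1, 5};

        /* Array access: one pointer-plus-offset computation, O(1). */
        printf("MyArray[4] = %d\n", *(MyArray + 4));        /* prints 1 */

        /* Build the same sequence as a linked list (nodes need not be adjacent). */
        struct node *head = NULL;
        for (int i = 5; i >= 0; i--) {
            struct node *n = malloc(sizeof *n);
            if (n == NULL)
                return 1;
            n->data = MyArray[i];
            n->next = head;
            head = n;
        }

        /* List access: to reach element 4 we must follow 4 pointers, O(N). */
        struct node *p = head;
        for (int i = 0; i < 4; i++)
            p = p->next;
        printf("element 4 of the list = %d\n", p->data);    /* prints 1 */

        return 0;
    }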


If I want to access P3, I can't get to it directly, because I don't know where it is in memory. All I know is where the root (P1) is, so instead I have to start at P1 and follow each pointer to the desired node. This is an O(N) lookup time (the lookup cost grows as elements are added). It is much more expensive to get to P1000 than to get to P4. Higher-level data structures, such as hash tables, stacks, and queues, may all use an array (or multiple arrays) internally, while linked lists and binary trees usually use nodes and pointers. You might wonder why anyone would use a data structure that requires linear traversal to look up a value instead of just using an array, but such structures have their uses. Take our array again. This time, I want to find the array element that holds the value 5:

    =========================
    | 6 | 4 | 2 | 3 | 1 | 5 |
    =========================
      ^   ^   ^   ^   ^   FOUND!

In this situation, I don't know what offset to add to the pointer to find it, so I have to start at 0 and work my way up until I find it. This means I have to perform 6 checks. Because of this, searching for a value in an array is considered O(N): the cost of searching increases as the array gets larger. Remember up above where I said that using a non-sequential data structure can sometimes have advantages? Searching for data is one of those advantages, and one of the best examples is the binary tree.
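A minimal linear-search sketch of the walk just described (the function and variable names are my own illustration):

    #include <stdio.h>

    /* Return the index of the first element equal to target, or -1 if absent.
       Every element may need to be checked, so this is O(N). */
    static int linear_search(const int *a, int n, int target)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == target)
                return i;
        return -1;
    }

    int main(void)
    {
        int MyArray[6] = {6, 4, 2, 3, 1, 5};
        printf("5 found at index %d\n", linear_search(MyArray, 6, 5));   /* index 5, after 6 checks */
        return 0;
    }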


A binary tree is a data structure similar to a linked list, except that instead of linking to a single next node, each node can link to two child nodes. When data is inserted into a binary tree, it uses several rules to decide where to place the new node. The basic concept is that if the new value is greater than the parent's, it is inserted to the left; if it is lower, it is inserted to the right. When searching this binary tree for the value 75, we only need to visit 3 nodes (O(log N)) because of this structure:

Is 75 less than 100? Look at the right node.
Is 75 greater than 50? Look at the left node.
There is the 75!

Even though there are 5 nodes in our tree, we did not need to look at the remaining two, because we knew that they (and their children) could not possibly contain the value we were looking for. This gives us a search time that in the worst case means we have to visit every node, but in the best case only requires us to visit a small portion of the nodes. That is where arrays get beaten: they offer only O(N) linear search time, despite their O(1) access time. This is an incredibly high-level overview of data structures in memory, skipping over a lot of details, but hopefully it illustrates an array's strengths and weaknesses compared to other data structures.
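Here is a small sketch of that search, following the answer's convention that larger values go to the left. The tree below is my own guess at a 5-node example consistent with the walk-through; the values 150 and 25 are assumptions used only to fill the two nodes the search never visits.

    #include <stdio.h>
    #include <stddef.h>

    struct tnode {
        int value;
        struct tnode *left;    /* values greater than this node go left  */
        struct tnode *right;   /* values lower than this node go right   */
    };

    /* Search using the convention from the answer: greater -> left, lower -> right. */
    static const struct tnode *search(const struct tnode *t, int target)
    {
        while (t != NULL && t->value != target)
            t = (target > t->value) ? t->left : t->right;
        return t;   /* NULL if the value is not in the tree */
    }

    int main(void)
    {
        /* Hypothetical 5-node tree: 75 sits to the left of 50, which sits to the right of 100. */
        struct tnode n75  = {75,  NULL, NULL};
        struct tnode n25  = {25,  NULL, NULL};   /* assumed value, never visited below */
        struct tnode n150 = {150, NULL, NULL};   /* assumed value, never visited below */
        struct tnode n50  = {50,  &n75, &n25};
        struct tnode root = {100, &n150, &n50};

        const struct tnode *hit = search(&root, 75);   /* visits 100, 50, 75: three nodes */
        printf("%s\n", hit ? "There is the 75!" : "not found");
        return 0;
    }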

